Result   | FAILURE
Tests    | 1 failed / 0 succeeded
Started  |
Elapsed  | 2h1m
Revision | release-1.1
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\sWhen\supgrading\sa\sworkload\scluster\susing\sClusterClass\sand\stesting\sK8S\sconformance\s\[Conformance\]\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\srun\skubetest$'
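The escaped --ginkgo.focus value above selects a single Cluster API e2e spec by its full name (each \s stands for a space in the spec text). A minimal Go sketch, illustrative only and not part of the job, showing the unescaped regex and a spec name of the form it targets:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// The --ginkgo.focus expression from the command above, shell escaping removed.
	focus := regexp.MustCompile(`capi-e2e\sWhen\supgrading\sa\sworkload\scluster\susing\sClusterClass\sand\stesting\sK8S\sconformance\s\[Conformance\]\s\[K8s-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\srun\skubetest$`)

	// Spec name as Ginkgo assembles it from the Describe/It texts (reconstructed here for illustration).
	spec := "capi-e2e When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] Should create and upgrade a workload cluster and run kubetest"

	fmt.Println(focus.MatchString(spec)) // true
}
```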
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:115

Failed to run Kubernetes conformance
Unexpected error:
    <*errors.withStack | 0xc0014dd188>: {
        error: <*errors.withMessage | 0xc0002ac3a0>{
            cause: <*errors.errorString | 0xc0020cb1d0>{
                s: "error container run failed with exit code 137",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x1a98018, 0x1adc429, 0x7b9731, 0x7b9125, 0x7b87fb, 0x7be569, 0x7bdf52, 0x7df031, 0x7ded56, 0x7de3a5, 0x7e07e5, 0x7ec9c9, 0x7ec7de, 0x1af7d32, 0x523bab, 0x46e1e1],
    }
    Unable to run conformance tests: error container run failed with exit code 137
occurred

/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:232
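Note that the failure is an exit status, not a test assertion: exit code 137 is 128 + 9, i.e. the container running the conformance suite was terminated by SIGKILL (commonly an out-of-memory kill or a hard timeout of the kubetest container). A minimal Go sketch of that decoding, assuming only the numeric exit code from the message above:

```go
package main

import (
	"fmt"
	"syscall"
)

func main() {
	// "error container run failed with exit code 137" from the failure above.
	exitCode := 137

	// Shells and container runtimes conventionally report 128+N when a
	// process is killed by signal N, so 137 corresponds to signal 9 (SIGKILL).
	if exitCode > 128 {
		sig := syscall.Signal(exitCode - 128)
		fmt.Printf("process killed by signal %d (%v)\n", int(sig), sig)
	}
}
```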
STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec
INFO: Creating namespace k8s-upgrade-and-conformance-8m9snr
INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-8m9snr"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "k8s-upgrade-and-conformance-8jx80k" using the "upgrades-cgroupfs" template (Kubernetes v1.22.17, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster k8s-upgrade-and-conformance-8jx80k --infrastructure (default) --kubernetes-version v1.22.17 --control-plane-machine-count 1 --worker-machine-count 2 --flavor upgrades-cgroupfs
INFO: Applying the cluster template yaml to the cluster
clusterclass.cluster.x-k8s.io/quick-start created
dockerclustertemplate.infrastructure.cluster.x-k8s.io/quick-start-cluster created
kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinetemplate created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/quick-start-default-worker-bootstraptemplate created
configmap/cni-k8s-upgrade-and-conformance-8jx80k-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-8jx80k-crs-0 created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-8jx80k-mp-0-config created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-8jx80k-mp-0-config-cgroupfs created
cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-8jx80k created
machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-8jx80k-mp-0 created
dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-8jx80k-dmp-0 created
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by k8s-upgrade-and-conformance-8m9snr/k8s-upgrade-and-conformance-8jx80k-jsr69 to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane k8s-upgrade-and-conformance-8m9snr/k8s-upgrade-and-conformance-8jx80k-jsr69 to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes
STEP: Upgrading the Cluster topology
INFO: Patching the new Kubernetes version to Cluster topology
INFO: Waiting for control-plane machines to have the upgraded Kubernetes version
STEP: Ensuring all control-plane machines have upgraded kubernetes version v1.23.15
INFO: Waiting for kube-proxy to have the upgraded Kubernetes version
STEP: Ensuring kube-proxy has the correct image
INFO: Waiting for CoreDNS to have the upgraded image tag
STEP: Ensuring CoreDNS has the correct image
INFO: Waiting for etcd to have the upgraded image tag
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-8m9snr/k8s-upgrade-and-conformance-8jx80k-md-0-chlxb to be upgraded to v1.23.15
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.23.15
STEP: Upgrading the machinepool instances
INFO: Patching the new Kubernetes version to Machine Pool k8s-upgrade-and-conformance-8m9snr/k8s-upgrade-and-conformance-8jx80k-mp-0
INFO: Waiting for Kubernetes versions of machines in MachinePool k8s-upgrade-and-conformance-8m9snr/k8s-upgrade-and-conformance-8jx80k-mp-0 to be upgraded from v1.22.17 to v1.23.15
INFO: Ensuring all MachinePool Instances have upgraded kubernetes version v1.23.15
STEP: Waiting until nodes are ready
STEP: Running conformance tests
STEP: Running e2e test: dir=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e, command=["-nodes=4" "-slowSpecThreshold=120" "/usr/local/bin/e2e.test" "--" "--kubeconfig=/tmp/kubeconfig" "--provider=skeleton" "--report-dir=/output" "--e2e-output-dir=/output/e2e-output" "--dump-logs-on-failure=false" "--report-prefix=kubetest." "--num-nodes=4" "-ginkgo.progress=true" "-ginkgo.skip=\\[Serial\\]" "-ginkgo.slowSpecThreshold=120" "-ginkgo.trace=true" "-ginkgo.v=true" "-disable-log-dump=true" "-ginkgo.flakeAttempts=3" "-ginkgo.focus=\\[Conformance\\]"]
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1673449473 - Will randomize all specs
Will run 7052 specs
Running in parallel across 4 nodes
Jan 11 15:04:36.866: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 15:04:36.868: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 11 15:04:36.882: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 11 15:04:36.914: INFO: 20 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 11 15:04:36.914: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 11 15:04:36.914: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 11 15:04:36.919: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jan 11 15:04:36.919: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 11 15:04:36.919: INFO: e2e test version: v1.23.15
Jan 11 15:04:36.921: INFO: kube-apiserver version: v1.23.15
Jan 11 15:04:36.922: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 15:04:36.927: INFO: Cluster IP family: ipv4
Jan 11 15:04:36.932: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 15:04:36.947: INFO: Cluster IP family: ipv4
Jan 11 15:04:36.937: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 15:04:36.954: INFO: Cluster IP family: ipv4
------------------------------
Jan 11 15:04:37.006: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 15:04:37.022: INFO: Cluster IP family: ipv4
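The -ginkgo.focus=\[Conformance\] and -ginkgo.skip=\[Serial\] flags passed to e2e.test above are plain regexes matched against each spec's full name, which is why only non-Serial Conformance specs run and everything else is reported as skipped. A small illustrative Go sketch of that selection logic, using made-up spec names in the style of this log:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	focus := regexp.MustCompile(`\[Conformance\]`) // from -ginkgo.focus
	skip := regexp.MustCompile(`\[Serial\]`)       // from -ginkgo.skip

	// Hypothetical spec names; only the first passes both filters.
	specs := []string{
		"[sig-network] Services should delete a collection of services [Conformance]",
		"[sig-scheduling] SchedulerPredicates [Serial] validates resource limits [Conformance]",
		"[sig-node] Pods should be updated",
	}

	for _, name := range specs {
		run := focus.MatchString(name) && !skip.MatchString(name)
		fmt.Printf("run=%-5v %s\n", run, name)
	}
}
```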
�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:04:37.028: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services W0111 15:04:37.055008 19 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jan 11 15:04:37.055: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should delete a collection of services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating a collection of services Jan 11 15:04:37.064: INFO: Creating e2e-svc-a-8cppf Jan 11 15:04:37.082: INFO: Creating e2e-svc-b-wgc5v Jan 11 15:04:37.107: INFO: Creating e2e-svc-c-fkrgr �[1mSTEP�[0m: deleting service collection Jan 11 15:04:37.262: INFO: Collection of services has been deleted [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:04:37.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-9869" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Services should delete a collection of services [Conformance]","total":-1,"completed":1,"skipped":35,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:04:37.073: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename custom-resource-definition W0111 15:04:37.113516 20 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jan 11 15:04:37.113: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
�[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 11 15:04:37.137: INFO: >>> kubeConfig: /tmp/kubeconfig [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:04:40.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "custom-resource-definition-2533" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":1,"skipped":18,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:04:37.305: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename replicaset �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 11 15:04:37.328: INFO: Creating ReplicaSet my-hostname-basic-1bddcfc7-06ba-4570-9198-17c1918a92ba Jan 11 15:04:37.338: INFO: Pod name my-hostname-basic-1bddcfc7-06ba-4570-9198-17c1918a92ba: Found 0 pods out of 1 Jan 11 15:04:42.344: INFO: Pod name my-hostname-basic-1bddcfc7-06ba-4570-9198-17c1918a92ba: Found 1 pods out of 1 Jan 11 15:04:42.344: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-1bddcfc7-06ba-4570-9198-17c1918a92ba" is running Jan 11 15:04:42.348: INFO: Pod "my-hostname-basic-1bddcfc7-06ba-4570-9198-17c1918a92ba-vcxrd" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-11 15:04:37 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-11 15:04:41 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-11 15:04:41 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-11 15:04:37 +0000 UTC Reason: Message:}]) Jan 11 15:04:42.348: INFO: Trying to dial the pod Jan 11 15:04:47.367: INFO: Controller my-hostname-basic-1bddcfc7-06ba-4570-9198-17c1918a92ba: Got expected result from replica 1 [my-hostname-basic-1bddcfc7-06ba-4570-9198-17c1918a92ba-vcxrd]: "my-hostname-basic-1bddcfc7-06ba-4570-9198-17c1918a92ba-vcxrd", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:04:47.367: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "replicaset-3415" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":2,"skipped":42,"failed":0} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:04:36.987: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl W0111 15:04:37.030314 16 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jan 11 15:04:37.030: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1573 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 Jan 11 15:04:37.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2107 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' Jan 11 15:04:37.480: INFO: stderr: "" Jan 11 15:04:37.480: INFO: stdout: "pod/e2e-test-httpd-pod created\n" �[1mSTEP�[0m: verifying the pod e2e-test-httpd-pod is running �[1mSTEP�[0m: verifying the pod e2e-test-httpd-pod was created Jan 11 15:04:42.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2107 get pod e2e-test-httpd-pod -o json' Jan 11 15:04:42.720: INFO: stderr: "" Jan 11 15:04:42.720: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2023-01-11T15:04:37Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-2107\",\n \"resourceVersion\": \"2034\",\n \"uid\": \"f65539e4-d785-4ef5-931d-1330c8e36bff\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-ncdxt\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"k8s-upgrade-and-conformance-8jx80k-worker-b15lfw\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n 
\"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-ncdxt\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-11T15:04:37Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-11T15:04:42Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-11T15:04:42Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-11T15:04:37Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://c4e03ab6d390754f2d07833ff05412f0cee30d70f5958de45c38b46d345ebad1\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\",\n \"imageID\": \"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2023-01-11T15:04:41Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.6\",\n \"phase\": \"Running\",\n \"podIP\": \"192.168.2.2\",\n \"podIPs\": [\n {\n \"ip\": \"192.168.2.2\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2023-01-11T15:04:37Z\"\n }\n}\n" �[1mSTEP�[0m: replace the image in the pod Jan 11 15:04:42.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2107 replace -f -' Jan 11 15:04:46.487: INFO: stderr: "" Jan 11 15:04:46.487: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" �[1mSTEP�[0m: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-2 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1577 Jan 11 15:04:46.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2107 delete pods e2e-test-httpd-pod' Jan 11 15:04:49.247: INFO: stderr: "" Jan 11 15:04:49.247: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:04:49.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-2107" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":1,"skipped":16,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:04:49.303: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating configMap configmap-636/configmap-test-4016b4a5-7242-441e-8462-267dca32d0b9 �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 11 15:04:49.368: INFO: Waiting up to 5m0s for pod "pod-configmaps-588ab808-f625-40ce-a2c4-8c8a466f33a1" in namespace "configmap-636" to be "Succeeded or Failed" Jan 11 15:04:49.371: INFO: Pod "pod-configmaps-588ab808-f625-40ce-a2c4-8c8a466f33a1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.121327ms Jan 11 15:04:51.376: INFO: Pod "pod-configmaps-588ab808-f625-40ce-a2c4-8c8a466f33a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00777778s Jan 11 15:04:53.384: INFO: Pod "pod-configmaps-588ab808-f625-40ce-a2c4-8c8a466f33a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015769392s �[1mSTEP�[0m: Saw pod success Jan 11 15:04:53.384: INFO: Pod "pod-configmaps-588ab808-f625-40ce-a2c4-8c8a466f33a1" satisfied condition "Succeeded or Failed" Jan 11 15:04:53.388: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-worker-b15lfw pod pod-configmaps-588ab808-f625-40ce-a2c4-8c8a466f33a1 container env-test: <nil> �[1mSTEP�[0m: delete the pod Jan 11 15:04:53.426: INFO: Waiting for pod pod-configmaps-588ab808-f625-40ce-a2c4-8c8a466f33a1 to disappear Jan 11 15:04:53.431: INFO: Pod pod-configmaps-588ab808-f625-40ce-a2c4-8c8a466f33a1 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:04:53.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-636" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":38,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:04:47.382: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename endpointslicemirroring �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39 [It] should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: mirroring a new custom Endpoint Jan 11 15:04:47.438: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 �[1mSTEP�[0m: mirroring an update to a custom Endpoint Jan 11 15:04:49.451: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3 �[1mSTEP�[0m: mirroring deletion of a custom Endpoint Jan 11 15:04:51.464: INFO: Waiting for 0 EndpointSlices to exist, got 1 [AfterEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:04:53.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "endpointslicemirroring-9161" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":3,"skipped":43,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:04:53.511: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename containers �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test override command Jan 11 15:04:53.563: INFO: Waiting up to 5m0s for pod "client-containers-4d4ad583-183c-4436-8aff-7c7ca069e4c0" in namespace "containers-4609" to be "Succeeded or Failed" Jan 11 15:04:53.568: INFO: Pod "client-containers-4d4ad583-183c-4436-8aff-7c7ca069e4c0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.206038ms Jan 11 15:04:55.573: INFO: Pod "client-containers-4d4ad583-183c-4436-8aff-7c7ca069e4c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009496918s Jan 11 15:04:57.579: INFO: Pod "client-containers-4d4ad583-183c-4436-8aff-7c7ca069e4c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015013123s �[1mSTEP�[0m: Saw pod success Jan 11 15:04:57.579: INFO: Pod "client-containers-4d4ad583-183c-4436-8aff-7c7ca069e4c0" satisfied condition "Succeeded or Failed" Jan 11 15:04:57.583: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-worker-b15lfw pod client-containers-4d4ad583-183c-4436-8aff-7c7ca069e4c0 container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 11 15:04:57.597: INFO: Waiting for pod client-containers-4d4ad583-183c-4436-8aff-7c7ca069e4c0 to disappear Jan 11 15:04:57.607: INFO: Pod client-containers-4d4ad583-183c-4436-8aff-7c7ca069e4c0 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:04:57.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "containers-4609" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":54,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:04:57.654: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename dns �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Jan 11 15:04:57.689: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-4391 1aa25c94-f03f-415c-a435-7f525266c7ed 2243 0 2023-01-11 15:04:57 +0000 UTC <nil> <nil> map[] map[] [] [] [{e2e.test Update v1 2023-01-11 15:04:57 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-v6wnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v6wnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toler
ation{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 15:04:57.694: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:04:59.699: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) �[1mSTEP�[0m: Verifying customized DNS suffix list is configured on pod... Jan 11 15:04:59.699: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-4391 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 15:04:59.699: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 11 15:04:59.700: INFO: ExecWithOptions: Clientset creation Jan 11 15:04:59.700: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/dns-4391/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) �[1mSTEP�[0m: Verifying customized DNS server is configured on pod... Jan 11 15:04:59.795: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-4391 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 15:04:59.795: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 11 15:04:59.796: INFO: ExecWithOptions: Clientset creation Jan 11 15:04:59.796: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/dns-4391/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-server-list&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) Jan 11 15:04:59.895: INFO: Deleting pod test-dns-nameservers... [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:04:59.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "dns-4391" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":5,"skipped":74,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:04:59.933: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename disruption �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Waiting for the pdb to be processed �[1mSTEP�[0m: Waiting for all pods to be running Jan 11 15:05:01.996: INFO: running pods: 0 < 3 [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:05:04.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "disruption-5883" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":6,"skipped":85,"failed":0} �[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:05:04.018: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename disruption �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should create a PodDisruptionBudget [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating the pdb �[1mSTEP�[0m: Waiting for the pdb to be processed �[1mSTEP�[0m: updating the pdb �[1mSTEP�[0m: Waiting for the pdb to be processed �[1mSTEP�[0m: patching the pdb �[1mSTEP�[0m: Waiting for the pdb to be processed �[1mSTEP�[0m: Waiting for the pdb to be deleted [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:05:08.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "disruption-4118" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":7,"skipped":87,"failed":0} �[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:04:40.354: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename pod-network-test �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Performing setup for networking test in namespace pod-network-test-9527 �[1mSTEP�[0m: creating a selector �[1mSTEP�[0m: Creating the service pods in kubernetes Jan 11 15:04:40.383: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 11 15:04:40.453: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:04:42.464: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:04:44.461: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:04:46.459: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 15:04:48.459: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 15:04:50.457: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 15:04:52.462: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 15:04:54.457: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 15:04:56.460: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 15:04:58.457: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 15:05:00.458: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 11 15:05:02.460: INFO: The status of Pod netserver-0 is Running (Ready = true) Jan 11 15:05:02.469: INFO: The status of Pod netserver-1 is Running (Ready = true) Jan 11 15:05:02.476: INFO: The status of Pod netserver-2 is Running (Ready = true) Jan 11 15:05:02.483: INFO: The status of Pod netserver-3 is Running (Ready = true) �[1mSTEP�[0m: Creating test pods Jan 11 15:05:04.522: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4 Jan 11 15:05:04.522: INFO: Going to poll 192.168.1.3 on port 8081 at least 0 times, with a maximum of 46 tries before failing Jan 11 15:05:04.526: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.1.3 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9527 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 15:05:04.526: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 11 15:05:04.528: INFO: ExecWithOptions: Clientset creation Jan 11 15:05:04.528: INFO: ExecWithOptions: execute(POST 
https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-9527/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+192.168.1.3+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) Jan 11 15:05:05.605: INFO: Found all 1 expected endpoints: [netserver-0] Jan 11 15:05:05.606: INFO: Going to poll 192.168.0.2 on port 8081 at least 0 times, with a maximum of 46 tries before failing Jan 11 15:05:05.609: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.0.2 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9527 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 15:05:05.609: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 11 15:05:05.610: INFO: ExecWithOptions: Clientset creation Jan 11 15:05:05.610: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-9527/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+192.168.0.2+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) Jan 11 15:05:06.706: INFO: Found all 1 expected endpoints: [netserver-1] Jan 11 15:05:06.706: INFO: Going to poll 192.168.2.3 on port 8081 at least 0 times, with a maximum of 46 tries before failing Jan 11 15:05:06.709: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.2.3 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9527 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 15:05:06.710: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 11 15:05:06.710: INFO: ExecWithOptions: Clientset creation Jan 11 15:05:06.710: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-9527/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+192.168.2.3+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) Jan 11 15:05:07.788: INFO: Found all 1 expected endpoints: [netserver-2] Jan 11 15:05:07.788: INFO: Going to poll 192.168.6.3 on port 8081 at least 0 times, with a maximum of 46 tries before failing Jan 11 15:05:07.792: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.6.3 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9527 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 15:05:07.792: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 11 15:05:07.793: INFO: ExecWithOptions: Clientset creation Jan 11 15:05:07.793: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-9527/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+192.168.6.3+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) Jan 11 15:05:08.882: INFO: Found all 1 expected endpoints: [netserver-3] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:05:08.883: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pod-network-test-9527" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":29,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:05:08.116: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename disruption �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Waiting for the pdb to be processed �[1mSTEP�[0m: Updating PodDisruptionBudget status �[1mSTEP�[0m: Waiting for all pods to be running Jan 11 15:05:10.174: INFO: running pods: 0 < 1 �[1mSTEP�[0m: locating a running pod �[1mSTEP�[0m: Waiting for the pdb to be processed �[1mSTEP�[0m: Patching PodDisruptionBudget status �[1mSTEP�[0m: Waiting for the pdb to be processed [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:05:12.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "disruption-6990" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":8,"skipped":89,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:05:08.910: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename var-expansion �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should allow substituting values in a volume subpath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test substitution in volume subpath Jan 11 15:05:08.941: INFO: Waiting up to 5m0s for pod "var-expansion-bf637fe9-85a9-426e-85eb-b2405e5c74c1" in namespace "var-expansion-7133" to be "Succeeded or Failed" Jan 11 15:05:08.947: INFO: Pod "var-expansion-bf637fe9-85a9-426e-85eb-b2405e5c74c1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.4221ms Jan 11 15:05:10.954: INFO: Pod "var-expansion-bf637fe9-85a9-426e-85eb-b2405e5c74c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012796596s Jan 11 15:05:12.959: INFO: Pod "var-expansion-bf637fe9-85a9-426e-85eb-b2405e5c74c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017976932s �[1mSTEP�[0m: Saw pod success Jan 11 15:05:12.959: INFO: Pod "var-expansion-bf637fe9-85a9-426e-85eb-b2405e5c74c1" satisfied condition "Succeeded or Failed" Jan 11 15:05:12.963: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-worker-b15lfw pod var-expansion-bf637fe9-85a9-426e-85eb-b2405e5c74c1 container dapi-container: <nil> �[1mSTEP�[0m: delete the pod Jan 11 15:05:12.982: INFO: Waiting for pod var-expansion-bf637fe9-85a9-426e-85eb-b2405e5c74c1 to disappear Jan 11 15:05:12.985: INFO: Pod var-expansion-bf637fe9-85a9-426e-85eb-b2405e5c74c1 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:05:12.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "var-expansion-7133" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":3,"skipped":40,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:05:13.038: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-runtime �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: create the container �[1mSTEP�[0m: wait for the container to reach Succeeded �[1mSTEP�[0m: get the container status �[1mSTEP�[0m: the container should be terminated �[1mSTEP�[0m: the termination message should be set Jan 11 15:05:17.109: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- �[1mSTEP�[0m: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:05:17.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-runtime-362" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":61,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:05:12.314: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename replicaset �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Given a Pod with a 'name' label pod-adoption-release is created Jan 11 15:05:12.350: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:05:14.375: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:05:16.356: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:05:18.355: INFO: The status of Pod pod-adoption-release is Running (Ready = true) �[1mSTEP�[0m: When a replicaset with a matching selector is created �[1mSTEP�[0m: Then the orphan pod is adopted �[1mSTEP�[0m: When the matched label of one of its pods change Jan 11 15:05:19.376: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 �[1mSTEP�[0m: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:05:20.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "replicaset-2255" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":9,"skipped":142,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:05:20.526: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:05:20.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-9067" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":10,"skipped":185,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:05:17.153: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename security-context �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Jan 11 15:05:17.202: INFO: Waiting up to 5m0s for pod "security-context-ad90ee99-04d7-44bc-afc6-2f2afb13b5fb" in namespace "security-context-9364" to be "Succeeded or Failed" Jan 11 15:05:17.208: INFO: Pod "security-context-ad90ee99-04d7-44bc-afc6-2f2afb13b5fb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.237094ms Jan 11 15:05:19.213: INFO: Pod "security-context-ad90ee99-04d7-44bc-afc6-2f2afb13b5fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01101015s Jan 11 15:05:21.219: INFO: Pod "security-context-ad90ee99-04d7-44bc-afc6-2f2afb13b5fb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017633751s �[1mSTEP�[0m: Saw pod success Jan 11 15:05:21.220: INFO: Pod "security-context-ad90ee99-04d7-44bc-afc6-2f2afb13b5fb" satisfied condition "Succeeded or Failed" Jan 11 15:05:21.232: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-worker-b15lfw pod security-context-ad90ee99-04d7-44bc-afc6-2f2afb13b5fb container test-container: <nil> �[1mSTEP�[0m: delete the pod Jan 11 15:05:21.258: INFO: Waiting for pod security-context-ad90ee99-04d7-44bc-afc6-2f2afb13b5fb to disappear Jan 11 15:05:21.264: INFO: Pod security-context-ad90ee99-04d7-44bc-afc6-2f2afb13b5fb no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:05:21.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "security-context-9364" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":66,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:04:53.475: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename resourcequota �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Counting existing ResourceQuota �[1mSTEP�[0m: Creating a ResourceQuota �[1mSTEP�[0m: Ensuring resource quota status is calculated �[1mSTEP�[0m: Creating a ConfigMap �[1mSTEP�[0m: Ensuring resource quota status captures configMap creation �[1mSTEP�[0m: Deleting a ConfigMap �[1mSTEP�[0m: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:05:21.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "resourcequota-7850" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":-1,"completed":3,"skipped":47,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:05:21.738: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename sysctl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65 [It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod with one valid and two invalid sysctls [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:05:21.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "sysctl-2208" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":4,"skipped":78,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:05:20.694: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename pods �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Create set of pods Jan 11 15:05:20.767: INFO: created test-pod-1 Jan 11 15:05:20.817: INFO: created test-pod-2 Jan 11 15:05:20.860: INFO: created test-pod-3 �[1mSTEP�[0m: waiting for all 3 pods to be running Jan 11 15:05:20.860: INFO: Waiting up to 5m0s for all pods (need at least 3) in namespace 'pods-76' to be running and ready Jan 11 15:05:20.879: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 15:05:20.879: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 15:05:20.879: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 15:05:20.879: INFO: 0 / 3 pods in namespace 'pods-76' are running and ready (0 seconds elapsed) Jan 11 15:05:20.879: INFO: expected 0 pod replicas in namespace 'pods-76', 0 are Running and Ready. 
Jan 11 15:05:20.879: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 15:05:20.879: INFO: test-pod-1 k8s-upgrade-and-conformance-8jx80k-worker-b15lfw Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:05:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:05:20 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:05:20 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:05:20 +0000 UTC }] Jan 11 15:05:20.879: INFO: test-pod-2 k8s-upgrade-and-conformance-8jx80k-md-0-chlxb-c9b4c6cfd-zv6w2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:05:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:05:20 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:05:20 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:05:20 +0000 UTC }] Jan 11 15:05:20.879: INFO: test-pod-3 k8s-upgrade-and-conformance-8jx80k-worker-b15lfw Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:05:20 +0000 UTC }] Jan 11 15:05:20.879: INFO: Jan 11 15:05:22.903: INFO: 3 / 3 pods in namespace 'pods-76' are running and ready (2 seconds elapsed) Jan 11 15:05:22.903: INFO: expected 0 pod replicas in namespace 'pods-76', 0 are Running and Ready. �[1mSTEP�[0m: waiting for all pods to be deleted Jan 11 15:05:22.932: INFO: Pod quantity 3 is different from expected quantity 0 Jan 11 15:05:23.939: INFO: Pod quantity 3 is different from expected quantity 0 Jan 11 15:05:24.938: INFO: Pod quantity 3 is different from expected quantity 0 [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:05:25.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pods-76" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":11,"skipped":205,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:05:21.397: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating configMap with name projected-configmap-test-volume-map-fb01d448-052b-4325-8c7d-0694b6a9f7a8 �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 11 15:05:21.466: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-239ec932-cd98-405c-bb97-0d4720cf4d85" in namespace "projected-6928" to be "Succeeded or Failed" Jan 11 15:05:21.471: INFO: Pod "pod-projected-configmaps-239ec932-cd98-405c-bb97-0d4720cf4d85": Phase="Pending", Reason="", readiness=false. Elapsed: 5.494975ms Jan 11 15:05:23.480: INFO: Pod "pod-projected-configmaps-239ec932-cd98-405c-bb97-0d4720cf4d85": Phase="Running", Reason="", readiness=true. Elapsed: 2.013769093s Jan 11 15:05:25.485: INFO: Pod "pod-projected-configmaps-239ec932-cd98-405c-bb97-0d4720cf4d85": Phase="Running", Reason="", readiness=false. Elapsed: 4.01953681s Jan 11 15:05:27.510: INFO: Pod "pod-projected-configmaps-239ec932-cd98-405c-bb97-0d4720cf4d85": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.044680839s �[1mSTEP�[0m: Saw pod success Jan 11 15:05:27.511: INFO: Pod "pod-projected-configmaps-239ec932-cd98-405c-bb97-0d4720cf4d85" satisfied condition "Succeeded or Failed" Jan 11 15:05:27.526: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-md-0-chlxb-c9b4c6cfd-57pm4 pod pod-projected-configmaps-239ec932-cd98-405c-bb97-0d4720cf4d85 container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 11 15:05:29.188: INFO: Waiting for pod pod-projected-configmaps-239ec932-cd98-405c-bb97-0d4720cf4d85 to disappear Jan 11 15:05:29.227: INFO: Pod pod-projected-configmaps-239ec932-cd98-405c-bb97-0d4720cf4d85 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:05:29.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-6928" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":108,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:05:26.157: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename gc �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: create the rc1 �[1mSTEP�[0m: create the rc2 �[1mSTEP�[0m: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well �[1mSTEP�[0m: delete the rc simpletest-rc-to-be-deleted �[1mSTEP�[0m: wait for the rc to be deleted Jan 11 15:05:37.985: INFO: 75 pods remaining Jan 11 15:05:37.985: INFO: 75 pods has nil DeletionTimestamp Jan 11 15:05:37.985: INFO: �[1mSTEP�[0m: Gathering metrics Jan 11 15:05:43.003: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-8jx80k-jsr69-d69v8 is Running (Ready = true) Jan 11 15:05:43.144: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For 
function_duration_seconds: For errors_total: For evicted_pods_total: Jan 11 15:05:43.144: INFO: Deleting pod "simpletest-rc-to-be-deleted-275xz" in namespace "gc-8312" Jan 11 15:05:43.162: INFO: Deleting pod "simpletest-rc-to-be-deleted-2bjnt" in namespace "gc-8312" Jan 11 15:05:43.179: INFO: Deleting pod "simpletest-rc-to-be-deleted-2hfwp" in namespace "gc-8312" Jan 11 15:05:43.196: INFO: Deleting pod "simpletest-rc-to-be-deleted-2lv6v" in namespace "gc-8312" Jan 11 15:05:43.223: INFO: Deleting pod "simpletest-rc-to-be-deleted-2sdx5" in namespace "gc-8312" Jan 11 15:05:43.244: INFO: Deleting pod "simpletest-rc-to-be-deleted-49jdc" in namespace "gc-8312" Jan 11 15:05:43.300: INFO: Deleting pod "simpletest-rc-to-be-deleted-4djjg" in namespace "gc-8312" Jan 11 15:05:43.337: INFO: Deleting pod "simpletest-rc-to-be-deleted-4dmkr" in namespace "gc-8312" Jan 11 15:05:43.368: INFO: Deleting pod "simpletest-rc-to-be-deleted-4dvwn" in namespace "gc-8312" Jan 11 15:05:43.409: INFO: Deleting pod "simpletest-rc-to-be-deleted-4jltz" in namespace "gc-8312" Jan 11 15:05:43.427: INFO: Deleting pod "simpletest-rc-to-be-deleted-4x446" in namespace "gc-8312" Jan 11 15:05:43.458: INFO: Deleting pod "simpletest-rc-to-be-deleted-52tsr" in namespace "gc-8312" Jan 11 15:05:43.499: INFO: Deleting pod "simpletest-rc-to-be-deleted-5fq47" in namespace "gc-8312" Jan 11 15:05:43.518: INFO: Deleting pod "simpletest-rc-to-be-deleted-5l47q" in namespace "gc-8312" Jan 11 15:05:43.541: INFO: Deleting pod "simpletest-rc-to-be-deleted-5q8n7" in namespace "gc-8312" Jan 11 15:05:43.559: INFO: Deleting pod "simpletest-rc-to-be-deleted-5t924" in namespace "gc-8312" Jan 11 15:05:43.580: INFO: Deleting pod "simpletest-rc-to-be-deleted-6m4sx" in namespace "gc-8312" Jan 11 15:05:43.624: INFO: Deleting pod "simpletest-rc-to-be-deleted-78zzz" in namespace "gc-8312" Jan 11 15:05:43.681: INFO: Deleting pod "simpletest-rc-to-be-deleted-7t25n" in namespace "gc-8312" Jan 11 15:05:43.714: INFO: Deleting pod "simpletest-rc-to-be-deleted-8cghd" in namespace "gc-8312" Jan 11 15:05:43.744: INFO: Deleting pod "simpletest-rc-to-be-deleted-8dxvk" in namespace "gc-8312" Jan 11 15:05:43.769: INFO: Deleting pod "simpletest-rc-to-be-deleted-8hmqh" in namespace "gc-8312" Jan 11 15:05:43.790: INFO: Deleting pod "simpletest-rc-to-be-deleted-8sfxl" in namespace "gc-8312" Jan 11 15:05:43.807: INFO: Deleting pod "simpletest-rc-to-be-deleted-8v7f7" in namespace "gc-8312" Jan 11 15:05:43.822: INFO: Deleting pod "simpletest-rc-to-be-deleted-8xdqx" in namespace "gc-8312" Jan 11 15:05:43.850: INFO: Deleting pod "simpletest-rc-to-be-deleted-9crqj" in namespace "gc-8312" Jan 11 15:05:43.874: INFO: Deleting pod "simpletest-rc-to-be-deleted-9ggjb" in namespace "gc-8312" Jan 11 15:05:43.896: INFO: Deleting pod "simpletest-rc-to-be-deleted-9kq92" in namespace "gc-8312" Jan 11 15:05:43.932: INFO: Deleting pod "simpletest-rc-to-be-deleted-9vtq6" in namespace "gc-8312" Jan 11 15:05:43.975: INFO: Deleting pod "simpletest-rc-to-be-deleted-9zp45" in namespace "gc-8312" Jan 11 15:05:43.994: INFO: Deleting pod "simpletest-rc-to-be-deleted-b44nj" in namespace "gc-8312" Jan 11 15:05:44.009: INFO: Deleting pod "simpletest-rc-to-be-deleted-b5gsr" in namespace "gc-8312" Jan 11 15:05:44.036: INFO: Deleting pod "simpletest-rc-to-be-deleted-b6rls" in namespace "gc-8312" Jan 11 15:05:44.055: INFO: Deleting pod "simpletest-rc-to-be-deleted-bkwt6" in namespace "gc-8312" Jan 11 15:05:44.065: INFO: Deleting pod "simpletest-rc-to-be-deleted-cnlj5" in namespace "gc-8312" Jan 11 15:05:44.089: 
INFO: Deleting pod "simpletest-rc-to-be-deleted-cslhx" in namespace "gc-8312" Jan 11 15:05:44.113: INFO: Deleting pod "simpletest-rc-to-be-deleted-d8fxw" in namespace "gc-8312" Jan 11 15:05:44.134: INFO: Deleting pod "simpletest-rc-to-be-deleted-dnj87" in namespace "gc-8312" Jan 11 15:05:44.163: INFO: Deleting pod "simpletest-rc-to-be-deleted-f28gc" in namespace "gc-8312" Jan 11 15:05:44.194: INFO: Deleting pod "simpletest-rc-to-be-deleted-fhdjg" in namespace "gc-8312" Jan 11 15:05:44.218: INFO: Deleting pod "simpletest-rc-to-be-deleted-flfkc" in namespace "gc-8312" Jan 11 15:05:44.276: INFO: Deleting pod "simpletest-rc-to-be-deleted-flrhg" in namespace "gc-8312" Jan 11 15:05:44.314: INFO: Deleting pod "simpletest-rc-to-be-deleted-fpk6f" in namespace "gc-8312" Jan 11 15:05:44.355: INFO: Deleting pod "simpletest-rc-to-be-deleted-frpss" in namespace "gc-8312" Jan 11 15:05:44.389: INFO: Deleting pod "simpletest-rc-to-be-deleted-g5prb" in namespace "gc-8312" Jan 11 15:05:44.430: INFO: Deleting pod "simpletest-rc-to-be-deleted-gjpzc" in namespace "gc-8312" Jan 11 15:05:44.492: INFO: Deleting pod "simpletest-rc-to-be-deleted-gkmbt" in namespace "gc-8312" Jan 11 15:05:44.521: INFO: Deleting pod "simpletest-rc-to-be-deleted-h9747" in namespace "gc-8312" Jan 11 15:05:44.550: INFO: Deleting pod "simpletest-rc-to-be-deleted-hsqmk" in namespace "gc-8312" Jan 11 15:05:44.573: INFO: Deleting pod "simpletest-rc-to-be-deleted-hvwd8" in namespace "gc-8312" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:05:44.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "gc-8312" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":12,"skipped":295,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:05:29.519: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename gc �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: create the rc �[1mSTEP�[0m: delete the rc �[1mSTEP�[0m: wait for all pods to be garbage collected �[1mSTEP�[0m: 
expected 0 pods, got 2 pods �[1mSTEP�[0m: Gathering metrics Jan 11 15:05:44.777: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-8jx80k-jsr69-d69v8 is Running (Ready = true) Jan 11 15:05:45.041: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:05:45.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "gc-8209" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":7,"skipped":116,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:05:44.874: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename custom-resource-definition �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: fetching the /apis discovery document �[1mSTEP�[0m: finding the apiextensions.k8s.io API group in the /apis discovery document �[1mSTEP�[0m: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document �[1mSTEP�[0m: fetching the /apis/apiextensions.k8s.io discovery document �[1mSTEP�[0m: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document �[1mSTEP�[0m: fetching the /apis/apiextensions.k8s.io/v1 discovery document �[1mSTEP�[0m: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:05:45.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "custom-resource-definition-8188" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":13,"skipped":362,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:05:21.834: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 11 15:05:21.878: INFO: Pod name sample-pod: Found 0 pods out of 1 Jan 11 15:05:26.886: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: Scaling up "test-rs" replicaset Jan 11 15:05:26.927: INFO: Updating replica set "test-rs" STEP: patching the ReplicaSet Jan 11 15:05:26.960: INFO: observed ReplicaSet test-rs in namespace replicaset-789 with ReadyReplicas 1, AvailableReplicas 1 Jan 11 15:05:27.051: INFO: observed ReplicaSet test-rs in namespace replicaset-789 with ReadyReplicas 1, AvailableReplicas 1 Jan 11 15:05:27.220: INFO: observed ReplicaSet test-rs in namespace replicaset-789 with ReadyReplicas 1, AvailableReplicas 1 Jan 11 15:05:27.256: INFO: observed ReplicaSet test-rs in namespace replicaset-789 with ReadyReplicas 1, AvailableReplicas 1 Jan 11 15:05:41.133: INFO: observed ReplicaSet test-rs in namespace replicaset-789 with ReadyReplicas 2, AvailableReplicas 2 Jan 11 15:05:50.944: INFO: observed Replicaset test-rs in namespace replicaset-789 with ReadyReplicas 3 found true [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:05:50.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-789" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":5,"skipped":97,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:05:45.215: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Jan 11 15:05:45.444: INFO: Waiting up to 5m0s for pod "downwardapi-volume-17b50a9e-815d-429a-aaa2-e15deec2a470" in namespace "downward-api-8672" to be "Succeeded or Failed" Jan 11 15:05:45.454: INFO: Pod "downwardapi-volume-17b50a9e-815d-429a-aaa2-e15deec2a470": Phase="Pending", Reason="", readiness=false. Elapsed: 10.132551ms Jan 11 15:05:47.459: INFO: Pod "downwardapi-volume-17b50a9e-815d-429a-aaa2-e15deec2a470": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014754026s Jan 11 15:05:49.464: INFO: Pod "downwardapi-volume-17b50a9e-815d-429a-aaa2-e15deec2a470": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020124664s Jan 11 15:05:51.472: INFO: Pod "downwardapi-volume-17b50a9e-815d-429a-aaa2-e15deec2a470": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028085899s �[1mSTEP�[0m: Saw pod success Jan 11 15:05:51.473: INFO: Pod "downwardapi-volume-17b50a9e-815d-429a-aaa2-e15deec2a470" satisfied condition "Succeeded or Failed" Jan 11 15:05:51.476: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-worker-r73y4c pod downwardapi-volume-17b50a9e-815d-429a-aaa2-e15deec2a470 container client-container: <nil> �[1mSTEP�[0m: delete the pod Jan 11 15:05:51.511: INFO: Waiting for pod downwardapi-volume-17b50a9e-815d-429a-aaa2-e15deec2a470 to disappear Jan 11 15:05:51.514: INFO: Pod downwardapi-volume-17b50a9e-815d-429a-aaa2-e15deec2a470 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:05:51.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-8672" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":366,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:05:51.532: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating secret with name secret-test-96b23016-fa1f-449d-a291-e12959b0a564 STEP: Creating a pod to test consume secrets Jan 11 15:05:51.574: INFO: Waiting up to 5m0s for pod "pod-secrets-b8f33853-3720-4b48-b86a-e16fe1840c9d" in namespace "secrets-824" to be "Succeeded or Failed" Jan 11 15:05:51.576: INFO: Pod "pod-secrets-b8f33853-3720-4b48-b86a-e16fe1840c9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.652336ms Jan 11 15:05:53.582: INFO: Pod "pod-secrets-b8f33853-3720-4b48-b86a-e16fe1840c9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00794697s Jan 11 15:05:55.587: INFO: Pod "pod-secrets-b8f33853-3720-4b48-b86a-e16fe1840c9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013223871s STEP: Saw pod success Jan 11 15:05:55.587: INFO: Pod "pod-secrets-b8f33853-3720-4b48-b86a-e16fe1840c9d" satisfied condition "Succeeded or Failed" Jan 11 15:05:55.591: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-worker-r73y4c pod pod-secrets-b8f33853-3720-4b48-b86a-e16fe1840c9d container secret-env-test: <nil> STEP: delete the pod Jan 11 15:05:55.610: INFO: Waiting for pod pod-secrets-b8f33853-3720-4b48-b86a-e16fe1840c9d to disappear Jan 11 15:05:55.615: INFO: Pod pod-secrets-b8f33853-3720-4b48-b86a-e16fe1840c9d no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:05:55.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-824" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":367,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:05:50.992: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename crd-publish-openapi �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 11 15:05:51.020: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 11 15:05:53.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6378 --namespace=crd-publish-openapi-6378 create -f -' Jan 11 15:05:54.699: INFO: stderr: "" Jan 11 15:05:54.699: INFO: stdout: "e2e-test-crd-publish-openapi-3068-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jan 11 15:05:54.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6378 --namespace=crd-publish-openapi-6378 delete e2e-test-crd-publish-openapi-3068-crds test-cr' Jan 11 15:05:54.787: INFO: stderr: "" Jan 11 15:05:54.787: INFO: stdout: "e2e-test-crd-publish-openapi-3068-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Jan 11 15:05:54.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6378 --namespace=crd-publish-openapi-6378 apply -f -' Jan 11 15:05:54.992: INFO: stderr: "" Jan 11 15:05:54.992: INFO: stdout: "e2e-test-crd-publish-openapi-3068-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jan 11 15:05:54.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6378 --namespace=crd-publish-openapi-6378 delete e2e-test-crd-publish-openapi-3068-crds test-cr' Jan 11 15:05:55.079: INFO: stderr: "" Jan 11 15:05:55.079: INFO: stdout: "e2e-test-crd-publish-openapi-3068-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" �[1mSTEP�[0m: kubectl explain works to explain CR Jan 11 15:05:55.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6378 explain e2e-test-crd-publish-openapi-3068-crds' Jan 11 15:05:55.273: INFO: stderr: "" Jan 11 15:05:55.273: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-3068-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t<string>\n 
APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t<Object>\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:05:59.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "crd-publish-openapi-6378" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":6,"skipped":114,"failed":0} [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:06:00.006: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename crd-publish-openapi �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 11 15:06:00.034: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 11 15:06:04.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7017 --namespace=crd-publish-openapi-7017 create -f -' Jan 11 15:06:05.681: INFO: stderr: "" Jan 11 15:06:05.681: INFO: stdout: "e2e-test-crd-publish-openapi-7913-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jan 11 15:06:05.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7017 --namespace=crd-publish-openapi-7017 delete e2e-test-crd-publish-openapi-7913-crds test-cr' Jan 11 15:06:05.754: INFO: stderr: "" Jan 11 15:06:05.754: INFO: stdout: "e2e-test-crd-publish-openapi-7913-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jan 11 15:06:05.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7017 --namespace=crd-publish-openapi-7017 apply -f -' Jan 11 15:06:06.001: INFO: stderr: "" Jan 11 15:06:06.001: INFO: stdout: "e2e-test-crd-publish-openapi-7913-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jan 11 15:06:06.001: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7017 --namespace=crd-publish-openapi-7017 delete e2e-test-crd-publish-openapi-7913-crds test-cr' Jan 11 15:06:06.081: INFO: stderr: "" Jan 11 15:06:06.081: INFO: stdout: "e2e-test-crd-publish-openapi-7913-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" �[1mSTEP�[0m: kubectl explain works to explain CR Jan 11 15:06:06.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7017 explain e2e-test-crd-publish-openapi-7913-crds' Jan 11 15:06:06.287: INFO: stderr: "" Jan 11 15:06:06.287: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-7913-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n <empty>\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:06:08.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "crd-publish-openapi-7017" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":7,"skipped":114,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:05:45.185: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename crd-webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let cr conversion webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the custom resource conversion webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 11 15:05:45.785: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jan 11 15:05:47.800: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 11, 15, 5, 45, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 11, 15, 5, 45, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 11, 15, 5, 45, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 11, 15, 5, 45, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-67c86bcf4b\" is progressing."}}, CollisionCount:(*int32)(nil)} �[1mSTEP�[0m: Deploying the 
webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 11 15:05:50.820: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 11 15:05:50.825: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 11 15:06:03.402: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-2608-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-5047.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout Jan 11 15:06:13.509: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-2608-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-5047.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout Jan 11 15:06:23.610: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-2608-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-5047.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout Jan 11 15:06:33.712: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-2608-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-5047.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout Jan 11 15:06:43.717: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-2608-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-5047.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout Jan 11 15:06:43.717: FAIL: Unexpected error: <*errors.errorString | 0xc0002442b0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/apimachinery.waitWebhookConversionReady(0xc0009d4f20, 0xc0036d2000, 0xc0036b90b0, {0x70cc3cc, 0x2}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:477 +0xf3 k8s.io/kubernetes/test/e2e/apimachinery.glob..func4.4() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:206 +0x113 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x0) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0005f8340, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:06:44.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "crd-webhook-5047" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • Failure [59.109 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 11 15:06:43.717: Unexpected error: <*errors.errorString | 0xc0002442b0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:477 ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:05:55.703: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 STEP: Creating service test in namespace statefulset-9981 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating stateful set ss in namespace statefulset-9981 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9981 Jan 11 15:05:55.746: INFO: Found 0 stateful pods, waiting for 1 Jan 11 15:06:05.751: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jan 11 15:06:05.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-9981 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 11 15:06:05.949: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 11 15:06:05.949: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 11 15:06:05.949: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 11 15:06:05.954: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 11 15:06:15.960: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 11 15:06:15.960: INFO: Waiting for statefulset status.replicas updated to 0 Jan 11 15:06:15.979: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 15:06:15.979: INFO: ss-0 k8s-upgrade-and-conformance-8jx80k-worker-r73y4c Running [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:05:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:06:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:06:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:05:55 +0000 UTC }] Jan 11 15:06:15.979: INFO: Jan 11 15:06:15.979: INFO: StatefulSet ss has not reached scale 3, at 1 Jan 11 15:06:16.983: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995784805s Jan 11 15:06:17.988: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991011162s Jan 11 15:06:18.993: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.985902311s Jan 11 15:06:19.998: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.980596657s Jan 11 15:06:21.006: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.97628348s Jan 11 15:06:22.011: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.968514229s Jan 11 15:06:23.016: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.963049869s Jan 11 15:06:24.022: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.958181908s Jan 11 15:06:25.027: INFO: Verifying statefulset ss doesn't scale past 3 for another 952.227197ms �[1mSTEP�[0m: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9981 Jan 11 15:06:26.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-9981 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 15:06:26.191: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 11 15:06:26.192: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 11 15:06:26.192: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 11 15:06:26.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-9981 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 15:06:26.368: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Jan 11 15:06:26.369: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 11 15:06:26.369: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 11 15:06:26.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-9981 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 15:06:26.529: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Jan 11 15:06:26.529: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 11 15:06:26.529: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 11 15:06:26.532: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Jan 11 15:06:36.539: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - 
Ready=true Jan 11 15:06:36.539: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 11 15:06:36.539: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true �[1mSTEP�[0m: Scale down will not halt with unhealthy stateful pod Jan 11 15:06:36.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-9981 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 11 15:06:36.706: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 11 15:06:36.706: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 11 15:06:36.706: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 11 15:06:36.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-9981 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 11 15:06:36.876: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 11 15:06:36.876: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 11 15:06:36.876: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 11 15:06:36.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-9981 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 11 15:06:37.031: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 11 15:06:37.031: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 11 15:06:37.031: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 11 15:06:37.031: INFO: Waiting for statefulset status.replicas updated to 0 Jan 11 15:06:37.036: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Jan 11 15:06:47.048: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 11 15:06:47.048: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 11 15:06:47.048: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 11 15:06:47.063: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 15:06:47.063: INFO: ss-0 k8s-upgrade-and-conformance-8jx80k-worker-r73y4c Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:05:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:06:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:06:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:05:55 +0000 UTC }] Jan 11 15:06:47.063: INFO: ss-1 k8s-upgrade-and-conformance-8jx80k-md-0-chlxb-c9b4c6cfd-57pm4 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:06:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:06:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:06:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:06:15 +0000 UTC }] Jan 11 15:06:47.063: INFO: ss-2 k8s-upgrade-and-conformance-8jx80k-md-0-chlxb-c9b4c6cfd-zv6w2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:06:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:06:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:06:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:06:15 +0000 UTC }] Jan 11 15:06:47.063: INFO: Jan 11 15:06:47.063: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 11 15:06:48.068: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 15:06:48.068: INFO: ss-0 k8s-upgrade-and-conformance-8jx80k-worker-r73y4c Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:05:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:06:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:06:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:05:55 +0000 UTC }] Jan 11 15:06:48.068: INFO: Jan 11 15:06:48.068: INFO: StatefulSet ss has not reached scale 0, at 1 Jan 11 15:06:49.074: INFO: Verifying statefulset ss doesn't scale past 0 for another 7.989851406s Jan 11 15:06:50.078: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.984949771s Jan 11 15:06:51.083: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.980815276s Jan 11 15:06:52.088: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.974999035s Jan 11 15:06:53.092: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.971212909s Jan 11 15:06:54.097: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.966326445s Jan 11 15:06:55.102: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.962144694s Jan 11 15:06:56.106: INFO: Verifying statefulset ss doesn't scale past 0 for another 956.715294ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-9981 Jan 11 15:06:57.111: INFO: Scaling statefulset ss to 0 Jan 11 15:06:57.124: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 Jan 11 15:06:57.130: INFO: Deleting all statefulset in ns statefulset-9981 Jan 11 15:06:57.134: INFO: Scaling statefulset ss to 0 Jan 11 15:06:57.149: INFO: Waiting for statefulset status.replicas updated to 0 Jan 11 15:06:57.153: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:06:57.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9981" for this suite.
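The StatefulSet spec above breaks each replica's HTTP readiness check by moving the web root aside, then scales the set to 0 and verifies the scale-down still completes with all pods unready. A minimal sketch of the same sequence against a live workload cluster, assuming the kubeconfig, namespace and pod names recorded in the log (statefulset-9981, ss-0..ss-2) and a StatefulSet named ss:
  # break the readiness probe on each replica, exactly as the spec does
  for p in ss-0 ss-1 ss-2; do
    kubectl --kubeconfig=/tmp/kubeconfig -n statefulset-9981 exec "$p" -- \
      /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
  done
  # scale to zero and watch the pods drain even though none are Ready
  kubectl --kubeconfig=/tmp/kubeconfig -n statefulset-9981 scale statefulset ss --replicas=0
  kubectl --kubeconfig=/tmp/kubeconfig -n statefulset-9981 get pods -w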
• ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":16,"skipped":402,"failed":0} ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:06:57.293: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward API volume plugin Jan 11 15:06:57.324: INFO: Waiting up to 5m0s for pod "downwardapi-volume-29182c44-ad45-481e-87ef-856d12d8c7c7" in namespace "projected-1649" to be "Succeeded or Failed" Jan 11 15:06:57.329: INFO: Pod "downwardapi-volume-29182c44-ad45-481e-87ef-856d12d8c7c7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.485894ms Jan 11 15:06:59.334: INFO: Pod "downwardapi-volume-29182c44-ad45-481e-87ef-856d12d8c7c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010516791s Jan 11 15:07:01.338: INFO: Pod "downwardapi-volume-29182c44-ad45-481e-87ef-856d12d8c7c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014661651s STEP: Saw pod success Jan 11 15:07:01.339: INFO: Pod "downwardapi-volume-29182c44-ad45-481e-87ef-856d12d8c7c7" satisfied condition "Succeeded or Failed" Jan 11 15:07:01.342: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-md-0-chlxb-c9b4c6cfd-zv6w2 pod downwardapi-volume-29182c44-ad45-481e-87ef-856d12d8c7c7 container client-container: <nil> STEP: delete the pod Jan 11 15:07:01.374: INFO: Waiting for pod downwardapi-volume-29182c44-ad45-481e-87ef-856d12d8c7c7 to disappear Jan 11 15:07:01.377: INFO: Pod downwardapi-volume-29182c44-ad45-481e-87ef-856d12d8c7c7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:07:01.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1649" for this suite.
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":471,"failed":0} ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:07:01.392: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward API volume plugin Jan 11 15:07:01.429: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5973fe4c-6e53-4215-8761-89488b9fece3" in namespace "projected-4994" to be "Succeeded or Failed" Jan 11 15:07:01.434: INFO: Pod "downwardapi-volume-5973fe4c-6e53-4215-8761-89488b9fece3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.356221ms Jan 11 15:07:03.438: INFO: Pod "downwardapi-volume-5973fe4c-6e53-4215-8761-89488b9fece3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008716418s Jan 11 15:07:05.443: INFO: Pod "downwardapi-volume-5973fe4c-6e53-4215-8761-89488b9fece3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013274226s STEP: Saw pod success Jan 11 15:07:05.443: INFO: Pod "downwardapi-volume-5973fe4c-6e53-4215-8761-89488b9fece3" satisfied condition "Succeeded or Failed" Jan 11 15:07:05.445: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-md-0-chlxb-c9b4c6cfd-zv6w2 pod downwardapi-volume-5973fe4c-6e53-4215-8761-89488b9fece3 container client-container: <nil> STEP: delete the pod Jan 11 15:07:05.462: INFO: Waiting for pod downwardapi-volume-5973fe4c-6e53-4215-8761-89488b9fece3 to disappear Jan 11 15:07:05.466: INFO: Pod downwardapi-volume-5973fe4c-6e53-4215-8761-89488b9fece3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:07:05.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4994" for this suite.
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":472,"failed":0} ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:07:05.501: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward API volume plugin Jan 11 15:07:05.530: INFO: Waiting up to 5m0s for pod "downwardapi-volume-abfffb41-de84-468e-9e56-7da657cec0a2" in namespace "downward-api-6775" to be "Succeeded or Failed" Jan 11 15:07:05.533: INFO: Pod "downwardapi-volume-abfffb41-de84-468e-9e56-7da657cec0a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.966793ms Jan 11 15:07:07.539: INFO: Pod "downwardapi-volume-abfffb41-de84-468e-9e56-7da657cec0a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008919649s Jan 11 15:07:09.543: INFO: Pod "downwardapi-volume-abfffb41-de84-468e-9e56-7da657cec0a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012815085s STEP: Saw pod success Jan 11 15:07:09.543: INFO: Pod "downwardapi-volume-abfffb41-de84-468e-9e56-7da657cec0a2" satisfied condition "Succeeded or Failed" Jan 11 15:07:09.546: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-md-0-chlxb-c9b4c6cfd-57pm4 pod downwardapi-volume-abfffb41-de84-468e-9e56-7da657cec0a2 container client-container: <nil> STEP: delete the pod Jan 11 15:07:09.568: INFO: Waiting for pod downwardapi-volume-abfffb41-de84-468e-9e56-7da657cec0a2 to disappear Jan 11 15:07:09.570: INFO: Pod downwardapi-volume-abfffb41-de84-468e-9e56-7da657cec0a2 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:07:09.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6775" for this suite.
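The Projected downwardAPI and Downward API volume specs above all mount container resource fields as files and check the pod logs. A minimal sketch of what such a pod looks like; the pod name, image and request value here are illustrative, not taken from the job:
  kubectl --kubeconfig=/tmp/kubeconfig apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-example   # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
      resources:
        requests:
          cpu: 250m
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: "cpu_request"
          resourceFieldRef:
            containerName: client-container
            resource: requests.cpu
  EOF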
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":487,"failed":0} ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:07:09.685: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap with name configmap-test-volume-map-fedae928-05fa-4c79-adde-2814670a348b STEP: Creating a pod to test consume configMaps Jan 11 15:07:09.721: INFO: Waiting up to 5m0s for pod "pod-configmaps-a637a46b-3615-41ed-b69a-3cabf9aee4af" in namespace "configmap-275" to be "Succeeded or Failed" Jan 11 15:07:09.724: INFO: Pod "pod-configmaps-a637a46b-3615-41ed-b69a-3cabf9aee4af": Phase="Pending", Reason="", readiness=false. Elapsed: 3.504894ms Jan 11 15:07:11.730: INFO: Pod "pod-configmaps-a637a46b-3615-41ed-b69a-3cabf9aee4af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008635611s Jan 11 15:07:13.734: INFO: Pod "pod-configmaps-a637a46b-3615-41ed-b69a-3cabf9aee4af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013092743s STEP: Saw pod success Jan 11 15:07:13.734: INFO: Pod "pod-configmaps-a637a46b-3615-41ed-b69a-3cabf9aee4af" satisfied condition "Succeeded or Failed" Jan 11 15:07:13.737: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-worker-r73y4c pod pod-configmaps-a637a46b-3615-41ed-b69a-3cabf9aee4af container agnhost-container: <nil> STEP: delete the pod Jan 11 15:07:13.755: INFO: Waiting for pod pod-configmaps-a637a46b-3615-41ed-b69a-3cabf9aee4af to disappear Jan 11 15:07:13.759: INFO: Pod pod-configmaps-a637a46b-3615-41ed-b69a-3cabf9aee4af no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:07:13.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-275" for this suite.
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":568,"failed":0} ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:07:13.785: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 11 15:07:13.813: INFO: >>> kubeConfig: /tmp/kubeconfig [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:07:14.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8837" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":21,"skipped":575,"failed":0} ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:07:14.386: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating the pod Jan 11 15:07:14.409: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:07:18.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5693" for this suite.
• ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":22,"skipped":590,"failed":0} ------------------------------ {"msg":"FAILED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":7,"skipped":124,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]} [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:06:44.296: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jan 11 15:06:44.651: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 15:06:47.675: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 11 15:06:47.678: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 11 15:07:00.338: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-9105-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-1766.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout Jan 11 15:07:10.445: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-9105-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-1766.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout Jan 11 15:07:20.545: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-9105-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-1766.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout Jan 11 15:07:30.649: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-9105-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-1766.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout Jan 11 15:07:40.655: INFO: error waiting for conversion to succeed during setup: conversion webhook for
stable.example.com/v2, Kind=e2e-test-crd-webhook-9105-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-1766.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout Jan 11 15:07:40.655: FAIL: Unexpected error: <*errors.errorString | 0xc0002442b0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/apimachinery.waitWebhookConversionReady(0xc0009d4f20, 0xc0009e0c80, 0xc00415bc50, {0x70cc3cc, 0x2}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:477 +0xf3 k8s.io/kubernetes/test/e2e/apimachinery.glob..func4.4() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:206 +0x113 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x0) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0005f8340, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:07:41.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1766" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • Failure [56.941 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 11 15:07:40.655: Unexpected error: <*errors.errorString | 0xc0002442b0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:477 ------------------------------ {"msg":"FAILED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":7,"skipped":124,"failed":2,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]}
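Both failed attempts above time out on the TLS handshake to the conversion-webhook Service, and the test namespace is torn down right afterwards, so the webhook pod is gone by the end of the run. On a live reproduction, a few standard kubectl checks against the names recorded in the log (namespace crd-webhook-1766, Deployment sample-crd-conversion-webhook-deployment, Service e2e-test-crd-conversion-webhook) would show whether the webhook pod ever became Ready and whether the Service had endpoints; this is a triage sketch, not part of the job output:
  kubectl --kubeconfig=/tmp/kubeconfig -n crd-webhook-1766 get deploy,pods,svc,endpoints -o wide
  kubectl --kubeconfig=/tmp/kubeconfig -n crd-webhook-1766 describe endpoints e2e-test-crd-conversion-webhook
  kubectl --kubeconfig=/tmp/kubeconfig -n crd-webhook-1766 logs deploy/sample-crd-conversion-webhook-deployment --tail=100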
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:07:41.241: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jan 11 15:07:41.925: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 15:07:44.945: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 11 15:07:44.949: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:07:48.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-5459" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":8,"skipped":124,"failed":2,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]} ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:07:18.543: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:59 [It] should be restarted with
a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating pod busybox-b4a83b3c-cda8-402c-aed2-b79b24785ef1 in namespace container-probe-6569 Jan 11 15:07:20.597: INFO: Started pod busybox-b4a83b3c-cda8-402c-aed2-b79b24785ef1 in namespace container-probe-6569 �[1mSTEP�[0m: checking the pod's current state and verifying that restartCount is present Jan 11 15:07:20.601: INFO: Initial restart count of pod busybox-b4a83b3c-cda8-402c-aed2-b79b24785ef1 is 0 Jan 11 15:08:10.728: INFO: Restart count of pod container-probe-6569/busybox-b4a83b3c-cda8-402c-aed2-b79b24785ef1 is now 1 (50.12768353s elapsed) �[1mSTEP�[0m: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:08:10.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-probe-6569" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":591,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:08:10.792: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating configMap with name projected-configmap-test-volume-50e7d035-c458-4f2a-966c-5725c54c3fa8 �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 11 15:08:10.825: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a47c6c67-fd4b-46b5-96a4-eca57becf46f" in namespace "projected-6174" to be "Succeeded or Failed" Jan 11 15:08:10.827: INFO: Pod "pod-projected-configmaps-a47c6c67-fd4b-46b5-96a4-eca57becf46f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.485717ms Jan 11 15:08:12.833: INFO: Pod "pod-projected-configmaps-a47c6c67-fd4b-46b5-96a4-eca57becf46f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007896365s Jan 11 15:08:14.837: INFO: Pod "pod-projected-configmaps-a47c6c67-fd4b-46b5-96a4-eca57becf46f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011766779s STEP: Saw pod success Jan 11 15:08:14.837: INFO: Pod "pod-projected-configmaps-a47c6c67-fd4b-46b5-96a4-eca57becf46f" satisfied condition "Succeeded or Failed" Jan 11 15:08:14.840: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-md-0-chlxb-c9b4c6cfd-zv6w2 pod pod-projected-configmaps-a47c6c67-fd4b-46b5-96a4-eca57becf46f container projected-configmap-volume-test: <nil> STEP: delete the pod Jan 11 15:08:14.856: INFO: Waiting for pod pod-projected-configmaps-a47c6c67-fd4b-46b5-96a4-eca57becf46f to disappear Jan 11 15:08:14.860: INFO: Pod pod-projected-configmaps-a47c6c67-fd4b-46b5-96a4-eca57becf46f no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:08:14.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6174" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":625,"failed":0} ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:08:14.900: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 11 15:08:18.968: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:08:18.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6692" for this suite.
• ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":643,"failed":0} ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:08:19.019: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:08:19.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6448" for this suite.
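The Services spec above drives an Endpoints object through create, update, patch and delete-by-collection purely via the API. An equivalent kubectl sketch, using an illustrative name and label rather than the ones the test generates:
  kubectl --kubeconfig=/tmp/kubeconfig -n default create -f - <<'EOF'
  apiVersion: v1
  kind: Endpoints
  metadata:
    name: example-endpoint           # illustrative
    labels:
      test: endpoint-lifecycle       # illustrative
  subsets:
  - addresses:
    - ip: 10.0.0.10
    ports:
    - port: 80
  EOF
  kubectl --kubeconfig=/tmp/kubeconfig -n default patch endpoints example-endpoint --type=merge \
    -p '{"subsets":[{"addresses":[{"ip":"10.0.0.11"}],"ports":[{"port":8080}]}]}'
  kubectl --kubeconfig=/tmp/kubeconfig -n default delete endpoints -l test=endpoint-lifecycle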
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 • ------------------------------ {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":26,"skipped":654,"failed":0} ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:08:19.147: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap with name configmap-test-volume-d9dd511d-bcd7-4c12-835c-a422a8cb6ff6 STEP: Creating a pod to test consume configMaps Jan 11 15:08:19.182: INFO: Waiting up to 5m0s for pod "pod-configmaps-43babc57-30f9-47b0-a329-4dedb9d1a399" in namespace "configmap-5326" to be "Succeeded or Failed" Jan 11 15:08:19.186: INFO: Pod "pod-configmaps-43babc57-30f9-47b0-a329-4dedb9d1a399": Phase="Pending", Reason="", readiness=false. Elapsed: 3.693829ms Jan 11 15:08:21.192: INFO: Pod "pod-configmaps-43babc57-30f9-47b0-a329-4dedb9d1a399": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009540381s Jan 11 15:08:23.197: INFO: Pod "pod-configmaps-43babc57-30f9-47b0-a329-4dedb9d1a399": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015036301s STEP: Saw pod success Jan 11 15:08:23.198: INFO: Pod "pod-configmaps-43babc57-30f9-47b0-a329-4dedb9d1a399" satisfied condition "Succeeded or Failed" Jan 11 15:08:23.202: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-worker-r73y4c pod pod-configmaps-43babc57-30f9-47b0-a329-4dedb9d1a399 container configmap-volume-test: <nil> STEP: delete the pod Jan 11 15:08:23.221: INFO: Waiting for pod pod-configmaps-43babc57-30f9-47b0-a329-4dedb9d1a399 to disappear Jan 11 15:08:23.225: INFO: Pod pod-configmaps-43babc57-30f9-47b0-a329-4dedb9d1a399 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:08:23.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5326" for this suite.
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":663,"failed":0} ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:08:23.263: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 15:08:24.118: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 15:08:27.163: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:08:27.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8412" for this suite. STEP: Destroying namespace "webhook-8412-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":28,"skipped":677,"failed":0} ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:08:27.619: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 11 15:08:27.689: INFO: Waiting up to 5m0s for pod "pod-292d88f2-e5f9-4a71-8268-394d4afbe722" in namespace "emptydir-8144" to be "Succeeded or Failed" Jan 11 15:08:27.708: INFO: Pod "pod-292d88f2-e5f9-4a71-8268-394d4afbe722": Phase="Pending", Reason="", readiness=false. Elapsed: 18.537217ms Jan 11 15:08:29.717: INFO: Pod "pod-292d88f2-e5f9-4a71-8268-394d4afbe722": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027616888s Jan 11 15:08:31.724: INFO: Pod "pod-292d88f2-e5f9-4a71-8268-394d4afbe722": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034365102s Jan 11 15:08:33.730: INFO: Pod "pod-292d88f2-e5f9-4a71-8268-394d4afbe722": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040581687s STEP: Saw pod success Jan 11 15:08:33.730: INFO: Pod "pod-292d88f2-e5f9-4a71-8268-394d4afbe722" satisfied condition "Succeeded or Failed" Jan 11 15:08:33.736: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-md-0-chlxb-c9b4c6cfd-zv6w2 pod pod-292d88f2-e5f9-4a71-8268-394d4afbe722 container test-container: <nil> STEP: delete the pod Jan 11 15:08:33.759: INFO: Waiting for pod pod-292d88f2-e5f9-4a71-8268-394d4afbe722 to disappear Jan 11 15:08:33.764: INFO: Pod pod-292d88f2-e5f9-4a71-8268-394d4afbe722 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:08:33.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8144" for this suite.
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":707,"failed":0} ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:04:37.024: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename container-probe W0111 15:04:37.070639 18 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jan 11 15:04:37.070: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:59 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating pod busybox-e03a1edf-49eb-495b-87d3-6c71ed650466 in namespace container-probe-5911 Jan 11 15:04:41.138: INFO: Started pod busybox-e03a1edf-49eb-495b-87d3-6c71ed650466 in namespace container-probe-5911 STEP: checking the pod's current state and verifying that restartCount is present Jan 11 15:04:41.143: INFO: Initial restart count of pod busybox-e03a1edf-49eb-495b-87d3-6c71ed650466 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:08:42.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5911" for this suite.
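Both Probing container specs in this run use an exec liveness probe that runs `cat /tmp/health` inside a busybox pod; whether the kubelet restarts the container depends only on whether the container keeps that file in place. A minimal sketch of such a pod, modelled on the upstream liveness-probe example rather than copied from the job (pod name, image and timings are illustrative):
  kubectl --kubeconfig=/tmp/kubeconfig apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-exec-example      # illustrative
  spec:
    containers:
    - name: busybox
      image: busybox
      # create the probe file, keep it for a while, then remove it to trigger a restart
      args: ["/bin/sh", "-c", "touch /tmp/health; sleep 60; rm -f /tmp/health; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/health"]
        initialDelaySeconds: 5
        periodSeconds: 5
  EOF
  kubectl --kubeconfig=/tmp/kubeconfig get pod liveness-exec-example -w   # watch RESTARTS increase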
• [SLOW TEST:245.085 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":43,"failed":0} ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:08:42.181: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:08:42.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6104" for this suite. • ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:08:33.816: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes.
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:08:50.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4848" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":30,"skipped":718,"failed":0} ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":2,"skipped":62,"failed":0} [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:08:42.310: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating replication controller my-hostname-basic-ddf1d906-64ce-47b5-b319-f304ed547728 Jan 11 15:08:42.366: INFO: Pod name my-hostname-basic-ddf1d906-64ce-47b5-b319-f304ed547728: Found 0 pods out of 1 Jan 11 15:08:47.383: INFO: Pod name my-hostname-basic-ddf1d906-64ce-47b5-b319-f304ed547728: Found 1 pods out of 1 Jan 11 15:08:47.383: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-ddf1d906-64ce-47b5-b319-f304ed547728" are running Jan 11 15:08:47.393: INFO: Pod "my-hostname-basic-ddf1d906-64ce-47b5-b319-f304ed547728-7dqcp" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-11 15:08:42 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-11 15:08:43 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True
LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-11 15:08:43 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-11 15:08:42 +0000 UTC Reason: Message:}]) Jan 11 15:08:47.393: INFO: Trying to dial the pod Jan 11 15:08:52.414: INFO: Controller my-hostname-basic-ddf1d906-64ce-47b5-b319-f304ed547728: Got expected result from replica 1 [my-hostname-basic-ddf1d906-64ce-47b5-b319-f304ed547728-7dqcp]: "my-hostname-basic-ddf1d906-64ce-47b5-b319-f304ed547728-7dqcp", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:08:52.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7124" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":3,"skipped":62,"failed":0} ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:08:50.047: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating the pod STEP: submitting the pod to kubernetes Jan 11 15:08:50.108: INFO: The status of Pod pod-update-6ea9303f-b5ba-4d2e-ad12-590ae31ebba4 is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:08:52.116: INFO: The status of Pod pod-update-6ea9303f-b5ba-4d2e-ad12-590ae31ebba4 is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 11 15:08:52.644: INFO: Successfully updated pod "pod-update-6ea9303f-b5ba-4d2e-ad12-590ae31ebba4" STEP: verifying the updated pod is in kubernetes Jan 11 15:08:52.656: INFO: Pod update OK [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:08:52.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3652" for this suite.
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":726,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:08:52.440: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Jan 11 15:08:52.497: INFO: Waiting up to 5m0s for pod "downwardapi-volume-660d03a3-c131-475b-a072-87834a5c82aa" in namespace "projected-3686" to be "Succeeded or Failed" Jan 11 15:08:52.504: INFO: Pod "downwardapi-volume-660d03a3-c131-475b-a072-87834a5c82aa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.306473ms Jan 11 15:08:54.510: INFO: Pod "downwardapi-volume-660d03a3-c131-475b-a072-87834a5c82aa": Phase="Running", Reason="", readiness=true. Elapsed: 2.012868772s Jan 11 15:08:56.519: INFO: Pod "downwardapi-volume-660d03a3-c131-475b-a072-87834a5c82aa": Phase="Running", Reason="", readiness=false. Elapsed: 4.021143135s Jan 11 15:08:58.529: INFO: Pod "downwardapi-volume-660d03a3-c131-475b-a072-87834a5c82aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031540869s �[1mSTEP�[0m: Saw pod success Jan 11 15:08:58.529: INFO: Pod "downwardapi-volume-660d03a3-c131-475b-a072-87834a5c82aa" satisfied condition "Succeeded or Failed" Jan 11 15:08:58.536: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-worker-r73y4c pod downwardapi-volume-660d03a3-c131-475b-a072-87834a5c82aa container client-container: <nil> �[1mSTEP�[0m: delete the pod Jan 11 15:08:58.598: INFO: Waiting for pod downwardapi-volume-660d03a3-c131-475b-a072-87834a5c82aa to disappear Jan 11 15:08:58.608: INFO: Pod downwardapi-volume-660d03a3-c131-475b-a072-87834a5c82aa no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:08:58.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-3686" for this suite. 
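The projected downward API "set mode on item file" spec above writes pod metadata into a volume file with an explicit per-item mode and then verifies the file's permissions from inside the container. A minimal sketch, using busybox and mode 0400 purely for illustration (the image, command and mode actually used by the spec are not shown in this log):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo         # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.36               # illustrative; the run uses an e2e test image
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 256                 # 0400 in octal; the per-item file mode being checked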
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":64,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:08:52.781: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename containers �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test override all Jan 11 15:08:52.835: INFO: Waiting up to 5m0s for pod "client-containers-c4390082-6cb5-405b-8340-463eb4cafc48" in namespace "containers-246" to be "Succeeded or Failed" Jan 11 15:08:52.842: INFO: Pod "client-containers-c4390082-6cb5-405b-8340-463eb4cafc48": Phase="Pending", Reason="", readiness=false. Elapsed: 6.997678ms Jan 11 15:08:54.849: INFO: Pod "client-containers-c4390082-6cb5-405b-8340-463eb4cafc48": Phase="Running", Reason="", readiness=true. Elapsed: 2.01325113s Jan 11 15:08:56.855: INFO: Pod "client-containers-c4390082-6cb5-405b-8340-463eb4cafc48": Phase="Running", Reason="", readiness=false. Elapsed: 4.02001069s Jan 11 15:08:58.863: INFO: Pod "client-containers-c4390082-6cb5-405b-8340-463eb4cafc48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027802225s �[1mSTEP�[0m: Saw pod success Jan 11 15:08:58.863: INFO: Pod "client-containers-c4390082-6cb5-405b-8340-463eb4cafc48" satisfied condition "Succeeded or Failed" Jan 11 15:08:58.870: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-md-0-chlxb-c9b4c6cfd-zv6w2 pod client-containers-c4390082-6cb5-405b-8340-463eb4cafc48 container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 11 15:08:58.921: INFO: Waiting for pod client-containers-c4390082-6cb5-405b-8340-463eb4cafc48 to disappear Jan 11 15:08:58.928: INFO: Pod client-containers-c4390082-6cb5-405b-8340-463eb4cafc48 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:08:58.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "containers-246" for this suite. 
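The "override the image's default command and arguments" spec above sets both command and args on a container, which replace the image's ENTRYPOINT and CMD respectively. A minimal sketch using the same test image as this run (the binary path, subcommand and argument values are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: command-override-demo                       # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.39  # image used in this run
    command: ["/agnhost"]                           # replaces the image ENTRYPOINT
    args: ["entrypoint-tester", "one", "two"]       # replaces the image CMD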
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":765,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:08:59.188: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 11 15:08:59.937: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 11 15:09:02.975: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 11 15:09:02.982: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Registering the custom resource webhook via the AdmissionRegistration API �[1mSTEP�[0m: Creating a custom resource that should be denied by the webhook �[1mSTEP�[0m: Creating a custom resource whose deletion would be denied by the webhook �[1mSTEP�[0m: Updating the custom resource with disallowed data should be denied �[1mSTEP�[0m: Deleting the custom resource should be denied �[1mSTEP�[0m: Remove the offending key and value from the custom resource data �[1mSTEP�[0m: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:09:06.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-548" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-548-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":33,"skipped":824,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:09:06.460: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename security-context-test �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 11 15:09:06.593: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-cc1a34c2-8404-4e16-81e4-6b60f97ba6cf" in namespace "security-context-test-7576" to be "Succeeded or Failed" Jan 11 15:09:06.598: INFO: Pod "busybox-privileged-false-cc1a34c2-8404-4e16-81e4-6b60f97ba6cf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.468112ms Jan 11 15:09:08.605: INFO: Pod "busybox-privileged-false-cc1a34c2-8404-4e16-81e4-6b60f97ba6cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012648351s Jan 11 15:09:10.613: INFO: Pod "busybox-privileged-false-cc1a34c2-8404-4e16-81e4-6b60f97ba6cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019879839s Jan 11 15:09:10.613: INFO: Pod "busybox-privileged-false-cc1a34c2-8404-4e16-81e4-6b60f97ba6cf" satisfied condition "Succeeded or Failed" Jan 11 15:09:10.635: INFO: Got logs for pod "busybox-privileged-false-cc1a34c2-8404-4e16-81e4-6b60f97ba6cf": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:09:10.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "security-context-test-7576" for this suite. 
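The Security Context spec above runs a container with privileged: false and checks that a privileged operation fails; the "ip: RTNETLINK answers: Operation not permitted" line captured from the pod is the expected outcome of touching network interfaces without privilege. A minimal sketch with illustrative names and image:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-false-demo   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.36                 # illustrative tag
    command: ["sh", "-c", "ip link add dummy0 type dummy || true"]
    securityContext:
      privileged: false                 # the setting under test; the ip command is denied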
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":864,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:09:10.667: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating Agnhost RC Jan 11 15:09:10.706: INFO: namespace kubectl-8650 Jan 11 15:09:10.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8650 create -f -' Jan 11 15:09:12.550: INFO: stderr: "" Jan 11 15:09:12.550: INFO: stdout: "replicationcontroller/agnhost-primary created\n" �[1mSTEP�[0m: Waiting for Agnhost primary to start. Jan 11 15:09:13.557: INFO: Selector matched 1 pods for map[app:agnhost] Jan 11 15:09:13.557: INFO: Found 0 / 1 Jan 11 15:09:14.557: INFO: Selector matched 1 pods for map[app:agnhost] Jan 11 15:09:14.558: INFO: Found 1 / 1 Jan 11 15:09:14.558: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 11 15:09:14.564: INFO: Selector matched 1 pods for map[app:agnhost] Jan 11 15:09:14.565: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 11 15:09:14.565: INFO: wait on agnhost-primary startup in kubectl-8650 Jan 11 15:09:14.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8650 logs agnhost-primary-6588s agnhost-primary' Jan 11 15:09:14.718: INFO: stderr: "" Jan 11 15:09:14.719: INFO: stdout: "Paused\n" �[1mSTEP�[0m: exposing RC Jan 11 15:09:14.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8650 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' Jan 11 15:09:14.910: INFO: stderr: "" Jan 11 15:09:14.910: INFO: stdout: "service/rm2 exposed\n" Jan 11 15:09:14.939: INFO: Service rm2 in namespace kubectl-8650 found. �[1mSTEP�[0m: exposing service Jan 11 15:09:16.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8650 expose service rm2 --name=rm3 --port=2345 --target-port=6379' Jan 11 15:09:17.130: INFO: stderr: "" Jan 11 15:09:17.130: INFO: stdout: "service/rm3 exposed\n" Jan 11 15:09:17.154: INFO: Service rm3 in namespace kubectl-8650 found. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:09:19.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-8650" for this suite. 
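The kubectl expose commands above are shorthand for creating Services that select the agnhost-primary pods: first rm2 on port 1234 targeting 6379, then rm3 on port 2345 re-exposing rm2. The first command corresponds roughly to this manifest (the selector is inferred from the app:agnhost label reported in the log; any other labels on the RC are not visible here):

apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:
    app: agnhost          # from "Selector matched 1 pods for map[app:agnhost]"
  ports:
  - port: 1234            # --port from the kubectl expose command
    targetPort: 6379      # --target-port; the port the agnhost-primary pod serves on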
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":35,"skipped":871,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:06:08.744: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-probe �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:59 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating pod liveness-c25f4224-bcb0-494c-801b-48cc6539bd52 in namespace container-probe-555 Jan 11 15:06:10.790: INFO: Started pod liveness-c25f4224-bcb0-494c-801b-48cc6539bd52 in namespace container-probe-555 �[1mSTEP�[0m: checking the pod's current state and verifying that restartCount is present Jan 11 15:06:10.793: INFO: Initial restart count of pod liveness-c25f4224-bcb0-494c-801b-48cc6539bd52 is 0 �[1mSTEP�[0m: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:10:11.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-probe-555" for this suite. 
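The container-probe spec above confirms that a pod whose TCP liveness probe points at a port that is actually listening is never restarted over the observation window (its restartCount stays at 0 for the full run). A minimal sketch of such a probe, with an illustrative server that listens on 8080:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp-demo                  # illustrative
spec:
  containers:
  - name: netexec
    image: k8s.gcr.io/e2e-test-images/agnhost:2.39
    args: ["netexec", "--http-port=8080"]  # keeps port 8080 open so the probe passes
    livenessProbe:
      tcpSocket:
        port: 8080                         # the tcp:8080 probe under test
      initialDelaySeconds: 15
      periodSeconds: 10                    # probe cadence; values here are illustrative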
�[32m• [SLOW TEST:242.843 seconds]�[0m [sig-node] Probing container �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23�[0m should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":117,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:10:11.859: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename svcaccounts �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 11 15:10:11.924: INFO: Got root ca configmap in namespace "svcaccounts-7844" Jan 11 15:10:11.941: INFO: Deleted root ca configmap in namespace "svcaccounts-7844" �[1mSTEP�[0m: waiting for a new root ca configmap created Jan 11 15:10:12.452: INFO: Recreated root ca configmap in namespace "svcaccounts-7844" Jan 11 15:10:12.464: INFO: Updated root ca configmap in namespace "svcaccounts-7844" �[1mSTEP�[0m: waiting for the root ca configmap reconciled Jan 11 15:10:12.972: INFO: Reconciled root ca configmap in namespace "svcaccounts-7844" [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:10:12.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "svcaccounts-7844" for this suite. 
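The ServiceAccounts spec above deletes, recreates and edits the kube-root-ca.crt ConfigMap and relies on the control plane reconciling it back, which is what the "Recreated"/"Reconciled" lines show. On current Kubernetes releases a ConfigMap of roughly this shape is maintained automatically in every namespace (the certificate body is elided here):

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-root-ca.crt        # reconciled automatically; manual edits are reverted
  namespace: svcaccounts-7844   # namespace from this run; every namespace gets one
data:
  ca.crt: |
    -----BEGIN CERTIFICATE-----
    <cluster CA bundle>
    -----END CERTIFICATE-----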
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":9,"skipped":189,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:09:19.302: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename crd-watch �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 11 15:09:19.346: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Creating first CR Jan 11 15:09:21.967: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-11T15:09:21Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-11T15:09:21Z]] name:name1 resourceVersion:6257 uid:83c168b6-3381-4043-b3a2-0926357c2444] num:map[num1:9223372036854775807 num2:1000000]]} �[1mSTEP�[0m: Creating second CR Jan 11 15:09:31.979: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-11T15:09:31Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-11T15:09:31Z]] name:name2 resourceVersion:6301 uid:bb711e6c-35a3-4a96-935c-35cc64367804] num:map[num1:9223372036854775807 num2:1000000]]} �[1mSTEP�[0m: Modifying first CR Jan 11 15:09:41.993: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-11T15:09:21Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-11T15:09:41Z]] name:name1 resourceVersion:6318 uid:83c168b6-3381-4043-b3a2-0926357c2444] num:map[num1:9223372036854775807 num2:1000000]]} �[1mSTEP�[0m: Modifying second CR Jan 11 15:09:52.004: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-11T15:09:31Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test 
operation:Update time:2023-01-11T15:09:51Z]] name:name2 resourceVersion:6335 uid:bb711e6c-35a3-4a96-935c-35cc64367804] num:map[num1:9223372036854775807 num2:1000000]]} �[1mSTEP�[0m: Deleting first CR Jan 11 15:10:02.015: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-11T15:09:21Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-11T15:09:41Z]] name:name1 resourceVersion:6354 uid:83c168b6-3381-4043-b3a2-0926357c2444] num:map[num1:9223372036854775807 num2:1000000]]} �[1mSTEP�[0m: Deleting second CR Jan 11 15:10:12.028: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-11T15:09:31Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-11T15:09:51Z]] name:name2 resourceVersion:6381 uid:bb711e6c-35a3-4a96-935c-35cc64367804] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:10:22.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "crd-watch-8797" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":36,"skipped":909,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:10:13.109: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename resourcequota �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Counting existing ResourceQuota �[1mSTEP�[0m: Creating a ResourceQuota �[1mSTEP�[0m: Ensuring resource quota status is calculated �[1mSTEP�[0m: Creating a Pod that fits quota �[1mSTEP�[0m: Ensuring ResourceQuota status captures the pod usage �[1mSTEP�[0m: Not allowing a pod to be created that exceeds remaining quota �[1mSTEP�[0m: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) �[1mSTEP�[0m: Ensuring a pod cannot update its resource requirements �[1mSTEP�[0m: Ensuring attempts to update pod resource requirements did not change quota usage �[1mSTEP�[0m: Deleting the pod �[1mSTEP�[0m: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:10:26.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "resourcequota-7249" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":10,"skipped":218,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:10:22.643: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1411 �[1mSTEP�[0m: creating an pod Jan 11 15:10:22.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2588 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.39 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s' Jan 11 15:10:22.852: INFO: stderr: "" Jan 11 15:10:22.852: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Waiting for log generator to start. Jan 11 15:10:22.852: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jan 11 15:10:22.852: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-2588" to be "running and ready, or succeeded" Jan 11 15:10:22.860: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 7.651296ms Jan 11 15:10:24.866: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.013952831s Jan 11 15:10:24.866: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jan 11 15:10:24.867: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] �[1mSTEP�[0m: checking for a matching strings Jan 11 15:10:24.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2588 logs logs-generator logs-generator' Jan 11 15:10:25.053: INFO: stderr: "" Jan 11 15:10:25.053: INFO: stdout: "I0111 15:10:23.875360 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/2x9q 521\nI0111 15:10:24.075930 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/659 335\nI0111 15:10:24.276429 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/d2nd 249\nI0111 15:10:24.476054 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/fk5 487\nI0111 15:10:24.675384 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/kpdw 581\nI0111 15:10:24.876037 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/5mkp 493\n" �[1mSTEP�[0m: limiting log lines Jan 11 15:10:25.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2588 logs logs-generator logs-generator --tail=1' Jan 11 15:10:25.221: INFO: stderr: "" Jan 11 15:10:25.221: INFO: stdout: "I0111 15:10:25.075404 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/9mb 388\n" Jan 11 15:10:25.221: INFO: got output "I0111 15:10:25.075404 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/9mb 388\n" �[1mSTEP�[0m: limiting log bytes Jan 11 15:10:25.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2588 logs logs-generator logs-generator --limit-bytes=1' Jan 11 15:10:25.396: INFO: stderr: "" Jan 11 15:10:25.396: INFO: stdout: "I" Jan 11 15:10:25.396: INFO: got output "I" �[1mSTEP�[0m: exposing timestamps Jan 11 15:10:25.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2588 logs logs-generator logs-generator --tail=1 --timestamps' Jan 11 15:10:25.573: INFO: stderr: "" Jan 11 15:10:25.573: INFO: stdout: "2023-01-11T15:10:25.476100028Z I0111 15:10:25.475608 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/dlzh 533\n" Jan 11 15:10:25.573: INFO: got output "2023-01-11T15:10:25.476100028Z I0111 15:10:25.475608 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/dlzh 533\n" �[1mSTEP�[0m: restricting to a time range Jan 11 15:10:28.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2588 logs logs-generator logs-generator --since=1s' Jan 11 15:10:28.286: INFO: stderr: "" Jan 11 15:10:28.286: INFO: stdout: "I0111 15:10:27.275922 1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/wsj 546\nI0111 15:10:27.476499 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/wx68 499\nI0111 15:10:27.676106 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/6r2 388\nI0111 15:10:27.875441 1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/tw8 241\nI0111 15:10:28.076051 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/ns/pods/zfc 404\n" Jan 11 15:10:28.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2588 logs logs-generator logs-generator --since=24h' Jan 11 15:10:28.454: INFO: stderr: "" Jan 11 15:10:28.454: INFO: stdout: "I0111 15:10:23.875360 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/2x9q 521\nI0111 15:10:24.075930 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/659 335\nI0111 15:10:24.276429 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/d2nd 249\nI0111 15:10:24.476054 1 logs_generator.go:76] 3 GET 
/api/v1/namespaces/ns/pods/fk5 487\nI0111 15:10:24.675384 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/kpdw 581\nI0111 15:10:24.876037 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/5mkp 493\nI0111 15:10:25.075404 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/9mb 388\nI0111 15:10:25.275987 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/ms9n 459\nI0111 15:10:25.475608 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/dlzh 533\nI0111 15:10:25.676200 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/jrt 409\nI0111 15:10:25.875643 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/4bzq 354\nI0111 15:10:26.076083 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/q4z 367\nI0111 15:10:26.275674 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/fvb 471\nI0111 15:10:26.476248 1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/cmj 241\nI0111 15:10:26.675761 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/jzw 561\nI0111 15:10:26.875998 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/br7l 335\nI0111 15:10:27.075369 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/kn5 244\nI0111 15:10:27.275922 1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/wsj 546\nI0111 15:10:27.476499 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/wx68 499\nI0111 15:10:27.676106 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/6r2 388\nI0111 15:10:27.875441 1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/tw8 241\nI0111 15:10:28.076051 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/ns/pods/zfc 404\nI0111 15:10:28.281612 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/vkpn 417\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1416 Jan 11 15:10:28.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2588 delete pod logs-generator' Jan 11 15:10:29.268: INFO: stderr: "" Jan 11 15:10:29.268: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:10:29.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-2588" for this suite. 
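The Kubectl logs spec above starts a logs-generator pod with kubectl run and then exercises the filtering flags shown in the output (--tail=1, --limit-bytes=1, --timestamps, --since=1s, --since=24h). The kubectl run invocation corresponds roughly to this manifest (image, args and restart policy taken from the command in the log):

apiVersion: v1
kind: Pod
metadata:
  name: logs-generator
spec:
  restartPolicy: Never
  containers:
  - name: logs-generator
    image: k8s.gcr.io/e2e-test-images/agnhost:2.39
    args: ["logs-generator", "--log-lines-total", "100", "--run-duration", "20s"]
# Filtering examples exercised in this run:
#   kubectl logs logs-generator logs-generator --tail=1
#   kubectl logs logs-generator logs-generator --limit-bytes=1
#   kubectl logs logs-generator logs-generator --tail=1 --timestamps
#   kubectl logs logs-generator logs-generator --since=1s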
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":37,"skipped":929,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:10:29.323: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 11 15:10:30.002: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 11 15:10:33.049: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Registering the mutating pod webhook via the AdmissionRegistration API �[1mSTEP�[0m: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:10:33.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-2728" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-2728-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":38,"skipped":942,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:10:33.511: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Jan 11 15:10:33.571: INFO: Waiting up to 5m0s for pod "downwardapi-volume-001660d6-9a49-4f2d-a845-9c0ade012f1d" in namespace "projected-4691" to be "Succeeded or Failed" Jan 11 15:10:33.578: INFO: Pod "downwardapi-volume-001660d6-9a49-4f2d-a845-9c0ade012f1d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.875571ms Jan 11 15:10:35.592: INFO: Pod "downwardapi-volume-001660d6-9a49-4f2d-a845-9c0ade012f1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020150139s Jan 11 15:10:37.601: INFO: Pod "downwardapi-volume-001660d6-9a49-4f2d-a845-9c0ade012f1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029740401s �[1mSTEP�[0m: Saw pod success Jan 11 15:10:37.601: INFO: Pod "downwardapi-volume-001660d6-9a49-4f2d-a845-9c0ade012f1d" satisfied condition "Succeeded or Failed" Jan 11 15:10:37.609: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-md-0-chlxb-c9b4c6cfd-57pm4 pod downwardapi-volume-001660d6-9a49-4f2d-a845-9c0ade012f1d container client-container: <nil> �[1mSTEP�[0m: delete the pod Jan 11 15:10:37.651: INFO: Waiting for pod downwardapi-volume-001660d6-9a49-4f2d-a845-9c0ade012f1d to disappear Jan 11 15:10:37.661: INFO: Pod downwardapi-volume-001660d6-9a49-4f2d-a845-9c0ade012f1d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:10:37.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-4691" for this suite. 
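The projected downward API "cpu limit" spec above surfaces a container's CPU limit as a file through a resourceFieldRef item. A minimal sketch, with illustrative names, image and limit value:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit-demo    # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.36               # illustrative
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m                     # illustrative limit; surfaced in the file below
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m             # report the limit in millicores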
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":954,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:10:26.323: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename crd-publish-openapi �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: set up a multi version CRD Jan 11 15:10:26.357: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: rename a version �[1mSTEP�[0m: check the new version name is served �[1mSTEP�[0m: check the old version name is removed �[1mSTEP�[0m: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:10:56.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "crd-publish-openapi-74" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":11,"skipped":222,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:10:56.935: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating all guestbook components Jan 11 15:10:56.998: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend Jan 11 15:10:56.998: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7746 create -f -' Jan 11 15:10:58.712: INFO: stderr: "" Jan 11 15:10:58.712: INFO: stdout: "service/agnhost-replica created\n" Jan 11 15:10:58.712: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend Jan 11 15:10:58.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7746 create -f -' Jan 11 15:10:59.634: INFO: stderr: "" Jan 11 15:10:59.634: INFO: stdout: "service/agnhost-primary created\n" Jan 11 15:10:59.634: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jan 11 15:10:59.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7746 create -f -' Jan 11 15:11:00.070: INFO: stderr: "" Jan 11 15:11:00.070: INFO: stdout: "service/frontend created\n" Jan 11 15:11:00.070: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.39 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Jan 11 15:11:00.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7746 create -f -' Jan 11 15:11:00.656: INFO: stderr: "" Jan 11 15:11:00.656: INFO: stdout: "deployment.apps/frontend created\n" Jan 11 15:11:00.656: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.39 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jan 11 15:11:00.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7746 create -f -' Jan 11 15:11:01.357: INFO: stderr: "" Jan 11 15:11:01.357: INFO: stdout: "deployment.apps/agnhost-primary created\n" Jan 11 15:11:01.358: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.39 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jan 11 15:11:01.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7746 create -f -' Jan 11 15:11:02.226: INFO: stderr: "" Jan 11 15:11:02.227: INFO: stdout: "deployment.apps/agnhost-replica created\n" �[1mSTEP�[0m: validating guestbook app Jan 11 15:11:02.227: INFO: Waiting for all frontend pods to be Running. Jan 11 15:11:07.281: INFO: Waiting for frontend to serve content. 
Jan 11 15:11:07.301: INFO: Trying to add a new entry to the guestbook. Jan 11 15:11:07.331: INFO: Verifying that added entry can be retrieved. �[1mSTEP�[0m: using delete to clean up resources Jan 11 15:11:07.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7746 delete --grace-period=0 --force -f -' Jan 11 15:11:07.614: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 15:11:07.614: INFO: stdout: "service \"agnhost-replica\" force deleted\n" �[1mSTEP�[0m: using delete to clean up resources Jan 11 15:11:07.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7746 delete --grace-period=0 --force -f -' Jan 11 15:11:07.968: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 15:11:07.969: INFO: stdout: "service \"agnhost-primary\" force deleted\n" �[1mSTEP�[0m: using delete to clean up resources Jan 11 15:11:07.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7746 delete --grace-period=0 --force -f -' Jan 11 15:11:08.252: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 15:11:08.252: INFO: stdout: "service \"frontend\" force deleted\n" �[1mSTEP�[0m: using delete to clean up resources Jan 11 15:11:08.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7746 delete --grace-period=0 --force -f -' Jan 11 15:11:08.424: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 15:11:08.424: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" �[1mSTEP�[0m: using delete to clean up resources Jan 11 15:11:08.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7746 delete --grace-period=0 --force -f -' Jan 11 15:11:08.694: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 15:11:08.695: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" �[1mSTEP�[0m: using delete to clean up resources Jan 11 15:11:08.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7746 delete --grace-period=0 --force -f -' Jan 11 15:11:08.920: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 11 15:11:08.920: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:11:08.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-7746" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":12,"skipped":269,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:11:09.180: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating configMap with name configmap-test-upd-77cbe8ae-9b61-4eac-8728-222061972033 �[1mSTEP�[0m: Creating the pod Jan 11 15:11:09.397: INFO: The status of Pod pod-configmaps-fc5afff9-59fe-4cd2-a061-0bc4730d2980 is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:11:11.405: INFO: The status of Pod pod-configmaps-fc5afff9-59fe-4cd2-a061-0bc4730d2980 is Running (Ready = true) �[1mSTEP�[0m: Updating configmap configmap-test-upd-77cbe8ae-9b61-4eac-8728-222061972033 �[1mSTEP�[0m: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:11:13.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-1647" for this suite. 
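The ConfigMap spec above mounts a ConfigMap as a volume, updates the ConfigMap object, and waits for the kubelet to sync the new data into the running pod; volume-projected ConfigMap data is refreshed on the kubelet's sync period, unlike environment variables, which are fixed at container start. A minimal sketch with illustrative names:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd-demo      # illustrative
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo          # illustrative
spec:
  containers:
  - name: configmap-volume-test
    image: busybox:1.36              # illustrative
    command: ["sh", "-c", "while true; do cat /etc/config/data-1; sleep 5; done"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/config
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-upd-demo
# After editing the ConfigMap (e.g. kubectl edit configmap configmap-test-upd-demo),
# the file under /etc/config reflects the new value on the kubelet's next sync.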
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":329,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:11:13.548: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Jan 11 15:11:13.606: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4992aa83-440e-4b5b-b954-5b50588a522c" in namespace "downward-api-9016" to be "Succeeded or Failed" Jan 11 15:11:13.612: INFO: Pod "downwardapi-volume-4992aa83-440e-4b5b-b954-5b50588a522c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.156578ms Jan 11 15:11:15.620: INFO: Pod "downwardapi-volume-4992aa83-440e-4b5b-b954-5b50588a522c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013894938s Jan 11 15:11:17.628: INFO: Pod "downwardapi-volume-4992aa83-440e-4b5b-b954-5b50588a522c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022445701s �[1mSTEP�[0m: Saw pod success Jan 11 15:11:17.628: INFO: Pod "downwardapi-volume-4992aa83-440e-4b5b-b954-5b50588a522c" satisfied condition "Succeeded or Failed" Jan 11 15:11:17.637: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-md-0-chlxb-c9b4c6cfd-zv6w2 pod downwardapi-volume-4992aa83-440e-4b5b-b954-5b50588a522c container client-container: <nil> �[1mSTEP�[0m: delete the pod Jan 11 15:11:17.683: INFO: Waiting for pod downwardapi-volume-4992aa83-440e-4b5b-b954-5b50588a522c to disappear Jan 11 15:11:17.689: INFO: Pod downwardapi-volume-4992aa83-440e-4b5b-b954-5b50588a522c no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:11:17.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-9016" for this suite. 
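The Downward API volume "cpu limit" spec above is the non-projected counterpart of the projected variant shown earlier: the same resourceFieldRef item sits directly under a downwardAPI volume. A minimal sketch with illustrative names and values:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-cpu-demo   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.36                # illustrative
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m                      # illustrative limit surfaced below
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:                       # plain downwardAPI volume, no projected wrapper
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m                  # report the limit in millicores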
[BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:10:37.692: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a test headless service
STEP: Running these commands on wheezy:
  for i in `seq 1 600`; do
    check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;
    check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;
    check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2734 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2734;
    check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2734 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2734;
    check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2734.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2734.svc;
    check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2734.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2734.svc;
    check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2734.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2734.svc;
    check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2734.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2734.svc;
    check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2734.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2734.svc;
    check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2734.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2734.svc;
    check="$$(dig +notcp +noall +answer +search 41.151.140.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.140.151.41_udp@PTR;
    check="$$(dig +tcp +noall +answer +search 41.151.140.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.140.151.41_tcp@PTR;
    sleep 1;
  done
STEP: Running these commands on jessie: the identical loop, writing its OK markers to /results/jessie_udp@..., /results/jessie_tcp@... and /results/10.140.151.41_udp@PTR, /results/10.140.151.41_tcp@PTR
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 11 15:10:51.930 - 15:10:52.473: INFO: Unable to read the wheezy_* and jessie_* records from pod dns-2734/dns-test-7e568443-ae5a-4c75-bf0c-5199fa86a1f9: the server could not find the requested resource (get pods dns-test-7e568443-ae5a-4c75-bf0c-5199fa86a1f9)
Jan 11 15:10:52.529: INFO: Lookups using dns-2734/dns-test-7e568443-ae5a-4c75-bf0c-5199fa86a1f9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2734 wheezy_tcp@dns-test-service.dns-2734 wheezy_udp@dns-test-service.dns-2734.svc wheezy_tcp@dns-test-service.dns-2734.svc wheezy_udp@_http._tcp.dns-test-service.dns-2734.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2734.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2734 jessie_tcp@dns-test-service.dns-2734 jessie_udp@dns-test-service.dns-2734.svc jessie_tcp@dns-test-service.dns-2734.svc jessie_udp@_http._tcp.dns-test-service.dns-2734.svc jessie_tcp@_http._tcp.dns-test-service.dns-2734.svc]
Jan 11 15:10:57.551 - 15:10:57.765: INFO: Unable to read the same records from pod dns-2734/dns-test-7e568443-ae5a-4c75-bf0c-5199fa86a1f9: the server could not find the requested resource
Jan 11 15:10:57.807: INFO: Lookups using dns-2734/dns-test-7e568443-ae5a-4c75-bf0c-5199fa86a1f9 failed for: [the same sixteen records]
Jan 11 15:11:02.563 - 15:11:03.138: INFO: Unable to read the same records from pod dns-2734/dns-test-7e568443-ae5a-4c75-bf0c-5199fa86a1f9: the server could not find the requested resource
Jan 11 15:11:03.343: INFO: Lookups using dns-2734/dns-test-7e568443-ae5a-4c75-bf0c-5199fa86a1f9 failed for: [the same sixteen records]
Jan 11 15:11:07.540 - 15:11:07.867: INFO: Unable to read the same records from pod dns-2734/dns-test-7e568443-ae5a-4c75-bf0c-5199fa86a1f9: the server could not find the requested resource
Jan 11 15:11:07.955: INFO: Lookups using dns-2734/dns-test-7e568443-ae5a-4c75-bf0c-5199fa86a1f9 failed for: [the same sixteen records]
Jan 11 15:11:12.667: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2734.svc from pod dns-2734/dns-test-7e568443-ae5a-4c75-bf0c-5199fa86a1f9: the server could not find the requested resource (get pods dns-test-7e568443-ae5a-4c75-bf0c-5199fa86a1f9)
Jan 11 15:11:12.694: INFO: Lookups using dns-2734/dns-test-7e568443-ae5a-4c75-bf0c-5199fa86a1f9 failed for: [jessie_tcp@_http._tcp.dns-test-service.dns-2734.svc]
Jan 11 15:11:17.767: INFO: DNS probes using dns-2734/dns-test-7e568443-ae5a-4c75-bf0c-5199fa86a1f9 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:11:18.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2734" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":40,"skipped":957,"failed":0}
------------------------------
[BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:11:17.741: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: validating api versions
Jan 11 15:11:17.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4295 api-versions'
Jan 11 15:11:18.278: INFO: stderr: ""
Jan 11 15:11:18.278: INFO: stdout: "admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nautoscaling/v1\nautoscaling/v2\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncoordination.k8s.io/v1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta2\nnetworking.k8s.io/v1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nscheduling.k8s.io/v1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:11:18.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4295" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":15,"skipped":350,"failed":0}
------------------------------
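The same check can be made without shelling out to kubectl by asking the discovery endpoint for the server's API groups and looking for the core "v1" version. A sketch, assuming the kubeconfig path used throughout this run:

```go
package main

import (
	"log"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		log.Fatal(err)
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			// The core (legacy) group advertises itself as plain "v1".
			if v.GroupVersion == "v1" {
				log.Println("v1 is in the available api versions")
				return
			}
		}
	}
	log.Fatal("v1 not found in server groups")
}
```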
[BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:11:18.288: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
STEP: listing events with field selection filtering on source
STEP: listing events with field selection filtering on reportingController
STEP: getting the test event
STEP: patching the test event
STEP: getting the test event
STEP: updating the test event
STEP: getting the test event
STEP: deleting the test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
[AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:11:18.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4575" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":41,"skipped":985,"failed":0}
------------------------------
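A sketch of the events.k8s.io/v1 reads this spec performs (list across all namespaces, then list again with a field selector on reportingController), using client-go's typed EventsV1 client; the reportingController value and the kubeconfig path are assumptions:

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()

	// List events in all namespaces, as the "listing events in all namespaces" step does.
	all, err := cs.EventsV1().Events(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("events in all namespaces: %d", len(all.Items))

	// The spec also filters on reportingController; the selector value here is illustrative.
	filtered, err := cs.EventsV1().Events("events-4575").List(ctx, metav1.ListOptions{
		FieldSelector: "reportingController=example.io/my-controller",
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("filtered events: %d", len(filtered.Items))
}
```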
[BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:11:18.801: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: validating cluster-info
Jan 11 15:11:18.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1760 cluster-info'
Jan 11 15:11:19.163: INFO: stderr: ""
Jan 11 15:11:19.163: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://172.18.0.3:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:11:19.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1760" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":42,"skipped":1018,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:11:19.253: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 11 15:11:19.796: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 11 15:11:22.827: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 11 15:11:22.833: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9565-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:11:26.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7127" for this suite.
STEP: Destroying namespace "webhook-7127-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":43,"skipped":1029,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:11:18.595: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:11:34.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7548" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":16,"skipped":404,"failed":0}
------------------------------
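A BestEffort-scoped quota only counts pods that set no resource requests or limits, which is why the spec creates both a BestEffort and a NotBestEffort quota and checks that each one ignores the pod the other captures. A sketch of the two objects with core/v1 types; the names and the hard pod count are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Counts only pods with no requests/limits (best-effort QoS).
	bestEffort := corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "quota-besteffort"},
		Spec: corev1.ResourceQuotaSpec{
			Hard:   corev1.ResourceList{corev1.ResourcePods: resource.MustParse("5")},
			Scopes: []corev1.ResourceQuotaScope{corev1.ResourceQuotaScopeBestEffort},
		},
	}
	// Counts only pods that do set requests or limits.
	notBestEffort := corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "quota-not-besteffort"},
		Spec: corev1.ResourceQuotaSpec{
			Hard:   corev1.ResourceList{corev1.ResourcePods: resource.MustParse("5")},
			Scopes: []corev1.ResourceQuotaScope{corev1.ResourceQuotaScopeNotBestEffort},
		},
	}
	fmt.Println(bestEffort.Name, notBestEffort.Name)
}
```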
[Conformance]","total":-1,"completed":16,"skipped":404,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:11:35.106: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename job �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a job �[1mSTEP�[0m: Ensuring active pods == parallelism �[1mSTEP�[0m: Orphaning one of the Job's Pods Jan 11 15:11:37.799: INFO: Successfully updated pod "adopt-release-5snh5" �[1mSTEP�[0m: Checking that the Job readopts the Pod Jan 11 15:11:37.799: INFO: Waiting up to 15m0s for pod "adopt-release-5snh5" in namespace "job-989" to be "adopted" Jan 11 15:11:37.806: INFO: Pod "adopt-release-5snh5": Phase="Running", Reason="", readiness=true. Elapsed: 7.644515ms Jan 11 15:11:39.816: INFO: Pod "adopt-release-5snh5": Phase="Running", Reason="", readiness=true. Elapsed: 2.017075699s Jan 11 15:11:39.816: INFO: Pod "adopt-release-5snh5" satisfied condition "adopted" �[1mSTEP�[0m: Removing the labels from the Job's Pod Jan 11 15:11:40.332: INFO: Successfully updated pod "adopt-release-5snh5" �[1mSTEP�[0m: Checking that the Job releases the Pod Jan 11 15:11:40.332: INFO: Waiting up to 15m0s for pod "adopt-release-5snh5" in namespace "job-989" to be "released" Jan 11 15:11:40.338: INFO: Pod "adopt-release-5snh5": Phase="Running", Reason="", readiness=true. Elapsed: 5.779338ms Jan 11 15:11:42.345: INFO: Pod "adopt-release-5snh5": Phase="Running", Reason="", readiness=true. Elapsed: 2.012656467s Jan 11 15:11:42.345: INFO: Pod "adopt-release-5snh5" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:11:42.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "job-989" for this suite. 
[BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:11:26.427: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
[It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a service externalname-service with the type=ExternalName in namespace services-1818
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-1818
I0111 15:11:26.649827 19 runners.go:193] Created replication controller with name: externalname-service, namespace: services-1818, replica count: 2
I0111 15:11:29.701227 19 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 11 15:11:29.701: INFO: Creating new exec pod
Jan 11 15:11:32.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1818 exec execpodjp248 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jan 11 15:11:35.084: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Jan 11 15:11:35.084: INFO: stdout: ""
Jan 11 15:11:36.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1818 exec execpodjp248 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jan 11 15:11:36.476: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Jan 11 15:11:36.476: INFO: stdout: "externalname-service-sh6ql"
Jan 11 15:11:36.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1818 exec execpodjp248 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.140.249.120 80'
Jan 11 15:11:38.781: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.140.249.120 80\nConnection to 10.140.249.120 80 port [tcp/http] succeeded!\n"
Jan 11 15:11:38.781: INFO: stdout: ""
Jan 11 15:11:39.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1818 exec execpodjp248 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.140.249.120 80'
Jan 11 15:11:42.155: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.140.249.120 80\nConnection to 10.140.249.120 80 port [tcp/http] succeeded!\n"
Jan 11 15:11:42.155: INFO: stdout: ""
Jan 11 15:11:42.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1818 exec execpodjp248 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.140.249.120 80'
Jan 11 15:11:43.105: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.140.249.120 80\nConnection to 10.140.249.120 80 port [tcp/http] succeeded!\n"
Jan 11 15:11:43.105: INFO: stdout: "externalname-service-sh6ql"
Jan 11 15:11:43.105: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:11:43.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1818" for this suite.
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
•
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":44,"skipped":1053,"failed":0}
------------------------------
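The type flip this spec performs can be done directly with client-go: fetch the ExternalName service, clear spec.externalName, switch the type to ClusterIP, and give it a port and selector so the replication controller's pods back it. A sketch; the namespace and service name come from the log, while the port, target port, selector, and kubeconfig path are assumptions:

```go
package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	svcs := cs.CoreV1().Services("services-1818")

	svc, err := svcs.Get(ctx, "externalname-service", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	svc.Spec.Type = corev1.ServiceTypeClusterIP
	svc.Spec.ExternalName = "" // ExternalName must be cleared when leaving that type
	svc.Spec.Selector = map[string]string{"name": "externalname-service"} // assumed label
	svc.Spec.Ports = []corev1.ServicePort{{
		Port:       80,
		TargetPort: intstr.FromInt(8080), // assumed backend port
		Protocol:   corev1.ProtocolTCP,
	}}
	if _, err := svcs.Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("service converted; a `nc -v -t -w 2 externalname-service 80` from an exec pod should now connect")
}
```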
[BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:11:42.402: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename hostport
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled
Jan 11 15:11:42.489: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Jan 11 15:11:44.497: INFO: The status of Pod pod1 is Running (Ready = true)
STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 172.18.0.5 on the node which pod1 resides and expect scheduled
Jan 11 15:11:44.514: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Jan 11 15:11:46.523: INFO: The status of Pod pod2 is Running (Ready = false)
Jan 11 15:11:48.524: INFO: The status of Pod pod2 is Running (Ready = true)
STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 172.18.0.5 but use UDP protocol on the node which pod2 resides
Jan 11 15:11:48.553: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true)
Jan 11 15:11:50.561: INFO: The status of Pod pod3 is Running (Ready = true)
Jan 11 15:11:50.578: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true)
Jan 11 15:11:52.584: INFO: The status of Pod e2e-host-exec is Running (Ready = true)
STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323
Jan 11 15:11:52.589: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.5 http://127.0.0.1:54323/hostname] Namespace:hostport-8624 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 15:11:52.589: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 15:11:52.592: INFO: ExecWithOptions: Clientset creation
Jan 11 15:11:52.592: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/hostport-8624/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+--interface+172.18.0.5+http%3A%2F%2F127.0.0.1%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING))
STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.5, port: 54323
Jan 11 15:11:52.726: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.5:54323/hostname] Namespace:hostport-8624 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 15:11:52.726: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 15:11:52.727: INFO: ExecWithOptions: Clientset creation
Jan 11 15:11:52.728: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/hostport-8624/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+http%3A%2F%2F172.18.0.5%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING))
STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.5, port: 54323 UDP
Jan 11 15:11:52.881: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.5 54323] Namespace:hostport-8624 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 15:11:52.881: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 15:11:52.882: INFO: ExecWithOptions: Clientset creation
Jan 11 15:11:52.883: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/hostport-8624/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=nc+-vuz+-w+5+172.18.0.5+54323&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING))
[AfterEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:11:58.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostport-8624" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":18,"skipped":468,"failed":0}
------------------------------
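The three pods above can share hostPort 54323 because the scheduler and kubelet treat the (hostIP, hostPort, protocol) triple as the conflict key, and that triple differs for each pod. A sketch of the relevant ContainerPort fields; the container port value is an assumption:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Same hostPort everywhere, but each (HostIP, Protocol) pair is distinct,
	// so no two entries claim the same host socket.
	ports := map[string]corev1.ContainerPort{
		"pod1": {ContainerPort: 8080, HostPort: 54323, HostIP: "127.0.0.1", Protocol: corev1.ProtocolTCP},
		"pod2": {ContainerPort: 8080, HostPort: 54323, HostIP: "172.18.0.5", Protocol: corev1.ProtocolTCP},
		"pod3": {ContainerPort: 8080, HostPort: 54323, HostIP: "172.18.0.5", Protocol: corev1.ProtocolUDP},
	}
	for name, p := range ports {
		fmt.Printf("%s: hostIP=%s hostPort=%d protocol=%s\n", name, p.HostIP, p.HostPort, p.Protocol)
	}
}
```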
[BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:11:58.128: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: getting the auto-created API token
Jan 11 15:11:58.716: INFO: created pod pod-service-account-defaultsa
Jan 11 15:11:58.716: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 11 15:11:58.723: INFO: created pod pod-service-account-mountsa
Jan 11 15:11:58.723: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 11 15:11:58.732: INFO: created pod pod-service-account-nomountsa
Jan 11 15:11:58.732: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 11 15:11:58.745: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 11 15:11:58.745: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 11 15:11:58.756: INFO: created pod pod-service-account-mountsa-mountspec
Jan 11 15:11:58.756: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 11 15:11:58.773: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 11 15:11:58.773: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 11 15:11:58.797: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 11 15:11:58.798: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 11 15:11:58.817: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 11 15:11:58.817: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 11 15:11:58.855: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 11 15:11:58.855: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:11:58.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9088" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":19,"skipped":498,"failed":0}
------------------------------
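This spec exercises the two automount knobs: AutomountServiceAccountToken on the ServiceAccount and on the pod spec, with the pod-level setting winning when both are set (which is why pod-service-account-nomountsa-mountspec above still gets a token). A sketch with core/v1 types; the names and image are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	// Account-level opt-out: pods using this SA get no token volume by default.
	sa := corev1.ServiceAccount{
		ObjectMeta:                   metav1.ObjectMeta{Name: "nomount-sa"},
		AutomountServiceAccountToken: boolPtr(false),
	}
	// Pod-level override: setting it on the pod spec takes precedence over the SA,
	// so this pod does get the token volume mounted.
	pod := corev1.PodSpec{
		ServiceAccountName:           sa.Name,
		AutomountServiceAccountToken: boolPtr(true),
		Containers:                   []corev1.Container{{Name: "c", Image: "registry.k8s.io/pause:3.7"}},
	}
	fmt.Println("sa automount:", *sa.AutomountServiceAccountToken, "pod automount:", *pod.AutomountServiceAccountToken)
}
```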
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:11:59.002: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Jan 11 15:11:59.051: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 15:12:02.414: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:12:17.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7508" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":20,"skipped":518,"failed":0}
------------------------------
[BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:12:17.585: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating pod
Jan 11 15:12:17.659: INFO: The status of Pod pod-hostip-4654465a-02cb-4573-830e-7f3e1edfcc89 is Pending, waiting for it to be Running (with Ready = true)
Jan 11 15:12:19.665: INFO: The status of Pod pod-hostip-4654465a-02cb-4573-830e-7f3e1edfcc89 is Running (Ready = true)
Jan 11 15:12:19.674: INFO: Pod pod-hostip-4654465a-02cb-4573-830e-7f3e1edfcc89 has hostIP: 172.18.0.7
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:12:19.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4855" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":554,"failed":0}
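The check above amounts to reading the pod back and asserting that status.hostIP has been populated by the kubelet; a hypothetical client-go helper (names are placeholders):

```go
// Hypothetical sketch of the "should get a host IP" assertion.
package example

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hostIPOf fetches a pod and returns status.hostIP, erroring if it is still empty.
func hostIPOf(cs kubernetes.Interface, ns, name string) (string, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	if pod.Status.HostIP == "" {
		return "", fmt.Errorf("pod %s/%s has no hostIP yet", ns, name)
	}
	return pod.Status.HostIP, nil
}
```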
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:12:19.728: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod pod-subpath-test-configmap-tb6b
STEP: Creating a pod to test atomic-volume-subpath
Jan 11 15:12:19.791: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-tb6b" in namespace "subpath-4303" to be "Succeeded or Failed"
Jan 11 15:12:19.796: INFO: Pod "pod-subpath-test-configmap-tb6b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.916856ms
Jan 11 15:12:21.803: INFO: Pod "pod-subpath-test-configmap-tb6b": Phase="Running", Reason="", readiness=true. Elapsed: 2.012416756s
Jan 11 15:12:23.813: INFO: Pod "pod-subpath-test-configmap-tb6b": Phase="Running", Reason="", readiness=true. Elapsed: 4.021742428s
Jan 11 15:12:25.819: INFO: Pod "pod-subpath-test-configmap-tb6b": Phase="Running", Reason="", readiness=true. Elapsed: 6.027827766s
Jan 11 15:12:27.827: INFO: Pod "pod-subpath-test-configmap-tb6b": Phase="Running", Reason="", readiness=true. Elapsed: 8.035739566s
Jan 11 15:12:29.834: INFO: Pod "pod-subpath-test-configmap-tb6b": Phase="Running", Reason="", readiness=true. Elapsed: 10.042692148s
Jan 11 15:12:31.842: INFO: Pod "pod-subpath-test-configmap-tb6b": Phase="Running", Reason="", readiness=true. Elapsed: 12.050682678s
Jan 11 15:12:33.849: INFO: Pod "pod-subpath-test-configmap-tb6b": Phase="Running", Reason="", readiness=true. Elapsed: 14.05834398s
Jan 11 15:12:35.858: INFO: Pod "pod-subpath-test-configmap-tb6b": Phase="Running", Reason="", readiness=true. Elapsed: 16.066948831s
Jan 11 15:12:37.865: INFO: Pod "pod-subpath-test-configmap-tb6b": Phase="Running", Reason="", readiness=true. Elapsed: 18.074315279s
Jan 11 15:12:39.874: INFO: Pod "pod-subpath-test-configmap-tb6b": Phase="Running", Reason="", readiness=true. Elapsed: 20.082890601s
Jan 11 15:12:41.884: INFO: Pod "pod-subpath-test-configmap-tb6b": Phase="Running", Reason="", readiness=false. Elapsed: 22.093113295s
Jan 11 15:12:43.892: INFO: Pod "pod-subpath-test-configmap-tb6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.101385587s
STEP: Saw pod success
Jan 11 15:12:43.892: INFO: Pod "pod-subpath-test-configmap-tb6b" satisfied condition "Succeeded or Failed"
Jan 11 15:12:43.899: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-md-0-chlxb-c9b4c6cfd-57pm4 pod pod-subpath-test-configmap-tb6b container test-container-subpath-configmap-tb6b: <nil>
STEP: delete the pod
Jan 11 15:12:43.938: INFO: Waiting for pod pod-subpath-test-configmap-tb6b to disappear
Jan 11 15:12:43.944: INFO: Pod pod-subpath-test-configmap-tb6b no longer exists
STEP: Deleting pod pod-subpath-test-configmap-tb6b
Jan 11 15:12:43.944: INFO: Deleting pod "pod-subpath-test-configmap-tb6b" in namespace "subpath-4303"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:12:43.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4303" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":22,"skipped":565,"failed":0}
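A hypothetical sketch of the pod shape behind this subpath spec: a ConfigMap volume mounted with subPath so that only one key appears at the mount point. Names, key, and image are placeholders, not the generated identifiers in the log above.

```go
// Hypothetical sketch of a ConfigMap volume consumed through subPath.
package example

import corev1 "k8s.io/api/core/v1"

func configMapSubpathPodSpec() corev1.PodSpec {
	return corev1.PodSpec{
		Volumes: []corev1.Volume{{
			Name: "config",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"},
				},
			},
		}},
		Containers: []corev1.Container{{
			Name:  "test-container-subpath",
			Image: "registry.k8s.io/e2e-test-images/agnhost:2.39", // placeholder image
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "config",
				MountPath: "/test/sub",
				SubPath:   "data-1", // expose a single key rather than the whole ConfigMap
			}},
		}},
		RestartPolicy: corev1.RestartPolicyNever,
	}
}
```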
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:12:44.013: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating projection with secret that has name projected-secret-test-map-a2a35997-b061-41a6-909c-5f2fd43e98fc
STEP: Creating a pod to test consume secrets
Jan 11 15:12:44.087: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f9cd2af3-667d-42f4-8717-ee5b6f0a1a36" in namespace "projected-7518" to be "Succeeded or Failed"
Jan 11 15:12:44.094: INFO: Pod "pod-projected-secrets-f9cd2af3-667d-42f4-8717-ee5b6f0a1a36": Phase="Pending", Reason="", readiness=false. Elapsed: 6.509455ms
Jan 11 15:12:46.102: INFO: Pod "pod-projected-secrets-f9cd2af3-667d-42f4-8717-ee5b6f0a1a36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014375312s
Jan 11 15:12:48.111: INFO: Pod "pod-projected-secrets-f9cd2af3-667d-42f4-8717-ee5b6f0a1a36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02285066s
STEP: Saw pod success
Jan 11 15:12:48.111: INFO: Pod "pod-projected-secrets-f9cd2af3-667d-42f4-8717-ee5b6f0a1a36" satisfied condition "Succeeded or Failed"
Jan 11 15:12:48.116: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-md-0-chlxb-c9b4c6cfd-57pm4 pod pod-projected-secrets-f9cd2af3-667d-42f4-8717-ee5b6f0a1a36 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jan 11 15:12:48.155: INFO: Waiting for pod pod-projected-secrets-f9cd2af3-667d-42f4-8717-ee5b6f0a1a36 to disappear
Jan 11 15:12:48.160: INFO: Pod pod-projected-secrets-f9cd2af3-667d-42f4-8717-ee5b6f0a1a36 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:12:48.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7518" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":576,"failed":0}
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:07:48.306: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 11 15:11:30.311: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-9654/dns-test-c3c8398e-ba6b-42e1-8811-1575f9ec9e1f: the server is currently unable to handle the request (get pods dns-test-c3c8398e-ba6b-42e1-8811-1575f9ec9e1f)
Jan 11 15:12:56.387: FAIL: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-9654/dns-test-c3c8398e-ba6b-42e1-8811-1575f9ec9e1f: Get
"https://172.18.0.3:6443/api/v1/namespaces/dns-9654/pods/dns-test-c3c8398e-ba6b-42e1-8811-1575f9ec9e1f/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local": context deadline exceeded Full Stack Trace k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x7fa9e02fe230, 0x0}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x79d5088, 0xc000138000}, 0xc004493b58) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233 +0x7c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x79d5088, 0xc000138000}, 0x98, 0x2cc8045, 0x68) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:580 +0x38 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x79d5088, 0xc000138000}, 0x4a, 0xc004493be8, 0x2441ec7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:526 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x78b5a40, 0xc00016e800, 0xc004493c30) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:512 +0x50 k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc00346b800, 0x4, 0x4}, {0x70d4db2, 0x7}, 0xc0035ab400, {0x7b06bd0, 0xc003762480}, 0x0, {0x0, ...}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:447 +0x1c5 k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:441 k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc001498160, 0xc0035ab400, {0xc00346b800, 0x4, 0x4}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:504 +0x470 k8s.io/kubernetes/test/e2e/network.glob..func2.1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:64 +0x43e k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x0) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0005f8340, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a E0111 15:12:56.388132 20 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Jan 11 15:12:56.387: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-9654/dns-test-c3c8398e-ba6b-42e1-8811-1575f9ec9e1f: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-9654/pods/dns-test-c3c8398e-ba6b-42e1-8811-1575f9ec9e1f/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:220, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x7fa9e02fe230, 0x0})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x79d5088, 0xc000138000}, 0xc004493b58)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233 +0x7c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x79d5088, 0xc000138000}, 0x98, 0x2cc8045, 0x68)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:580 +0x38\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x79d5088, 0xc000138000}, 0x4a, 0xc004493be8, 0x2441ec7)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:526 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x78b5a40, 0xc00016e800, 0xc004493c30)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:512 +0x50\nk8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc00346b800, 0x4, 0x4}, {0x70d4db2, 0x7}, 0xc0035ab400, {0x7b06bd0, 0xc003762480}, 0x0, {0x0, ...})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:447 +0x1c5\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:441\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc001498160, 0xc0035ab400, {0xc00346b800, 0x4, 0x4})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:504 
+0x470\nk8s.io/kubernetes/test/e2e/network.glob..func2.1()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:64 +0x43e\nk8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697\nk8s.io/kubernetes/test/e2e.TestE2E(0x0)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19\ntesting.tRunner(0xc0005f8340, 0x735e880)\n\t/usr/local/go/src/testing/testing.go:1259 +0x102\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1306 +0x35a"} ( Your test failed. Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. But, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. ) goroutine 152 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x6c3d000, 0xc002150500}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x7d k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc000118280}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x75 panic({0x6c3d000, 0xc002150500}) /usr/local/go/src/runtime/panic.go:1038 +0x215 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0x73 panic({0x62d5960, 0x78abec0}) /usr/local/go/src/runtime/panic.go:1038 +0x215 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail({0xc00073ab00, 0x159}, {0xc0044935f0, 0x0, 0x40}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xdd k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00073ab00, 0x159}, {0xc0044936d0, 0x70cbf6a, 0xc0044936f8}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1a7 k8s.io/kubernetes/test/e2e/framework.Failf({0x717cde2, 0x2d}, {0xc004493940, 0x0, 0x0}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x131 k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:464 +0x889 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x7fa9e02fe230, 0x0}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x79d5088, 0xc000138000}, 0xc004493b58) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233 +0x7c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x79d5088, 0xc000138000}, 0x98, 0x2cc8045, 0x68) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:580 +0x38 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x79d5088, 0xc000138000}, 0x4a, 
0xc004493be8, 0x2441ec7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:526 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x78b5a40, 0xc00016e800, 0xc004493c30) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:512 +0x50 k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc00346b800, 0x4, 0x4}, {0x70d4db2, 0x7}, 0xc0035ab400, {0x7b06bd0, 0xc003762480}, 0x0, {0x0, ...}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:447 +0x1c5 k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:441 k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc001498160, 0xc0035ab400, {0xc00346b800, 0x4, 0x4}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:504 +0x470 k8s.io/kubernetes/test/e2e/network.glob..func2.1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:64 +0x43e k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0005f84e0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xba k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0044955c8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x125 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x7b k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0037cec30, 0xc004495990, {0x78b5a40, 0xc00016e800}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x2a9 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0037cec30, {0x78b5a40, 0xc00016e800}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xe7 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc00420e000, 0xc0037cec30) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0xe5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc00420e000) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x1a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc00420e000) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0xc5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc00017e070, {0x7fa9e0580d38, 0xc0005f8340}, {0x710baa9, 0x40}, {0xc000e805d0, 0x3, 0x3}, {0x7a2d318, 0xc00016e800}, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:79 +0x4d2
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters({0x78bc3a0, 0xc0005f8340}, {0x710baa9, 0x14}, {0xc00005c200, 0x3, 0x6})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:219 +0x185
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters({0x78bc3a0, 0xc0005f8340}, {0x710baa9, 0x14}, {0xc000ea25c0, 0x2, 0x2})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:207 +0xf9
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x0)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0005f8340, 0x735e880)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:12:56.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9654" for this suite.
• Failure [308.107 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for the cluster [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Jan 11 15:12:56.387: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-9654/dns-test-c3c8398e-ba6b-42e1-8811-1575f9ec9e1f: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-9654/pods/dns-test-c3c8398e-ba6b-42e1-8811-1575f9ec9e1f/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local": context deadline exceeded

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220
------------------------------
{"msg":"FAILED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":8,"skipped":160,"failed":3,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-network] DNS should provide DNS for the cluster [Conformance]"]}
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:12:56.417: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 11 15:12:58.495: INFO: DNS probes using dns-3007/dns-test-68c389a1-ddf7-4813-b025-4ecd360cfcf8 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:12:58.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3007" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":9,"skipped":160,"failed":3,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-network] DNS should provide DNS for the cluster [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:12:48.206: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:12:59.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7783" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":24,"skipped":582,"failed":0}
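A hypothetical sketch of an object-count ResourceQuota of the kind this spec creates before watching a ReplicaSet consume and then release it; the exact resource names and hard limits used by the conformance test may differ.

```go
// Hypothetical sketch of an object-count quota covering ReplicaSets.
package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func replicaSetQuota() *corev1.ResourceQuota {
	return &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				// Generic object-count quota: at most 2 ReplicaSets in the namespace.
				corev1.ResourceName("count/replicasets.apps"): resource.MustParse("2"),
			},
		},
	}
}
```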
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:12:59.467: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Jan 11 15:12:59.507: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3d5822db-52d9-4f75-91c7-8e300554d17a" in namespace "downward-api-6473" to be "Succeeded or Failed"
Jan 11 15:12:59.511: INFO: Pod "downwardapi-volume-3d5822db-52d9-4f75-91c7-8e300554d17a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11375ms
Jan 11 15:13:01.517: INFO: Pod "downwardapi-volume-3d5822db-52d9-4f75-91c7-8e300554d17a": Phase="Running", Reason="", readiness=false. Elapsed: 2.010445979s
Jan 11 15:13:03.523: INFO: Pod "downwardapi-volume-3d5822db-52d9-4f75-91c7-8e300554d17a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015665707s
STEP: Saw pod success
Jan 11 15:13:03.523: INFO: Pod "downwardapi-volume-3d5822db-52d9-4f75-91c7-8e300554d17a" satisfied condition "Succeeded or Failed"
Jan 11 15:13:03.527: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-md-0-chlxb-c9b4c6cfd-57pm4 pod downwardapi-volume-3d5822db-52d9-4f75-91c7-8e300554d17a container client-container: <nil>
STEP: delete the pod
Jan 11 15:13:03.547: INFO: Waiting for pod downwardapi-volume-3d5822db-52d9-4f75-91c7-8e300554d17a to disappear
Jan 11 15:13:03.550: INFO: Pod downwardapi-volume-3d5822db-52d9-4f75-91c7-8e300554d17a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:13:03.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6473" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":666,"failed":0}
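A hypothetical sketch of the downward API volume item this spec relies on: limits.cpu projected into a file, where the value falls back to the node's allocatable CPU when the container sets no CPU limit. Field names are from k8s.io/api/core/v1; the volume and file names are placeholders.

```go
// Hypothetical sketch of a downward API volume exposing the container's CPU limit.
package example

import corev1 "k8s.io/api/core/v1"

func downwardAPICPULimitVolume(containerName string) corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "cpu_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: containerName,
						Resource:      "limits.cpu", // defaults to node allocatable CPU if no limit is set
					},
				}},
			},
		},
	}
}
```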
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:08:58.763: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53
STEP: create the container to handle the HTTPGet hook request.
Jan 11 15:08:58.864: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Jan 11 15:09:00.873: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the pod with lifecycle hook
Jan 11 15:09:00.903 - 15:12:36.911: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) [same status line repeated roughly every 2s for about 3m36s]
Jan 11 15:12:38.911 - 15:14:00.913: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) [same status line repeated roughly every 2s for about 1m22s]
Jan 11 15:14:00.914: FAIL: Unexpected error:
    <*errors.errorString | 0xc0002bc2b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*PodClient).CreateSync(0xc004256e40, 0x7fdc8b309d28)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:107 +0x94
k8s.io/kubernetes/test/e2e/common/node.glob..func12.1.2(0xc004142400)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:72 +0x73
k8s.io/kubernetes/test/e2e/common/node.glob..func12.1.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:105 +0x32b
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000186d00, 0x735e880)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:14:00.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5738" for this suite.
• Failure [302.161 seconds]
[sig-node] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44
    should execute poststart exec hook properly [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

    Jan 11 15:14:00.914: Unexpected error:
        <*errors.errorString | 0xc0002bc2b0>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:107
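For context, a hypothetical sketch of a postStart exec hook like the one this failed spec configures; note the failure above is CreateSync timing out because the pod never became Ready within 5 minutes, not the hook command itself failing. Image and command are placeholders, and the field name shown (LifecycleHandler) follows v1.23-era client-go; earlier releases call the same type Handler.

```go
// Hypothetical sketch of a container with a postStart exec lifecycle hook.
package example

import corev1 "k8s.io/api/core/v1"

func podWithPostStartExecHook() corev1.PodSpec {
	return corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:  "pod-with-poststart-exec-hook",
			Image: "registry.k8s.io/e2e-test-images/agnhost:2.39", // placeholder image
			Lifecycle: &corev1.Lifecycle{
				PostStart: &corev1.LifecycleHandler{
					Exec: &corev1.ExecAction{
						// Illustrative only: the real e2e hook calls back to the
						// pod-handle-http-request helper created in BeforeEach.
						Command: []string{"sh", "-c", "echo poststart-hook-ran"},
					},
				},
			},
		}},
	}
}
```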
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:11:43.255: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:59
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod liveness-288bd05d-b235-405d-b31b-960f6029136e in namespace container-probe-3645
Jan 11 15:11:45.394: INFO: Started pod liveness-288bd05d-b235-405d-b31b-960f6029136e in namespace container-probe-3645
STEP: checking the pod's current state and verifying that restartCount is present
Jan 11 15:11:45.400: INFO: Initial restart count of pod liveness-288bd05d-b235-405d-b31b-960f6029136e is 0
Jan 11 15:12:05.511: INFO: Restart count of pod container-probe-3645/liveness-288bd05d-b235-405d-b31b-960f6029136e is now 1 (20.110756698s elapsed)
Jan 11 15:12:25.945: INFO: Restart count of pod container-probe-3645/liveness-288bd05d-b235-405d-b31b-960f6029136e is now 2 (40.544738276s elapsed)
Jan 11 15:12:46.008: INFO: Restart count of pod container-probe-3645/liveness-288bd05d-b235-405d-b31b-960f6029136e is now 3 (1m0.608348598s elapsed)
Jan 11 15:13:06.068: INFO: Restart count of pod container-probe-3645/liveness-288bd05d-b235-405d-b31b-960f6029136e is now 4 (1m20.667617205s elapsed)
Jan 11 15:14:08.222: INFO: Restart count of pod container-probe-3645/liveness-288bd05d-b235-405d-b31b-960f6029136e is now 5 (2m22.821600027s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:14:08.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3645" for this suite.
• [SLOW TEST:144.990 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":1063,"failed":0}
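A hypothetical sketch of a liveness probe in the spirit of this spec: a probe that keeps failing, so the kubelet restarts the container and restartCount climbs 1, 2, 3, ... as logged above. It uses v1.23-era ProbeHandler field names; the path, port, and thresholds are placeholders.

```go
// Hypothetical sketch of a deliberately failing liveness probe.
package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func alwaysFailingLivenessProbe() *corev1.Probe {
	return &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/healthz", // endpoint the test container fails on purpose
				Port: intstr.FromInt(8080),
			},
		},
		InitialDelaySeconds: 15,
		PeriodSeconds:       3,
		FailureThreshold:    1, // restart on the first failed probe
	}
}
```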
• [SLOW TEST:144.990 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":1063,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] server version
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:14:08.251: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename server-version
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should find the server version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Request ServerVersion
STEP: Confirm major version
Jan 11 15:14:08.340: INFO: Major version: 1
STEP: Confirm minor version
Jan 11 15:14:08.340: INFO: cleanMinorVersion: 23
Jan 11 15:14:08.340: INFO: Minor version: 23
[AfterEach] [sig-api-machinery] server version
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:14:08.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-8863" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":46,"skipped":1065,"failed":0}
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:14:08.369: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 11 15:14:12.423: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:14:12.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-365" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":-1,"completed":47,"skipped":1073,"failed":0}
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:14:12.472: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward api env vars
Jan 11 15:14:12.516: INFO: Waiting up to 5m0s for pod "downward-api-49efb7c9-1184-4729-b929-805299c95212" in namespace "downward-api-7435" to be "Succeeded or Failed"
Jan 11 15:14:12.522: INFO: Pod "downward-api-49efb7c9-1184-4729-b929-805299c95212": Phase="Pending", Reason="", readiness=false. Elapsed: 5.824112ms
Jan 11 15:14:14.527: INFO: Pod "downward-api-49efb7c9-1184-4729-b929-805299c95212": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010714759s
Jan 11 15:14:16.532: INFO: Pod "downward-api-49efb7c9-1184-4729-b929-805299c95212": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016594503s
STEP: Saw pod success
Jan 11 15:14:16.533: INFO: Pod "downward-api-49efb7c9-1184-4729-b929-805299c95212" satisfied condition "Succeeded or Failed"
Jan 11 15:14:16.536: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-worker-r73y4c pod downward-api-49efb7c9-1184-4729-b929-805299c95212 container dapi-container: <nil>
STEP: delete the pod
Jan 11 15:14:16.567: INFO: Waiting for pod downward-api-49efb7c9-1184-4729-b929-805299c95212 to disappear
Jan 11 15:14:16.570: INFO: Pod downward-api-49efb7c9-1184-4729-b929-805299c95212 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:14:16.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7435" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":1085,"failed":0}
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:13:03.602: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109
STEP: Creating service test in namespace statefulset-5554
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a new StatefulSet
Jan 11 15:13:03.644: INFO: Found 0 stateful pods, waiting for 3
Jan 11 15:13:13.649: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 15:13:13.649: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 15:13:13.649: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 15:13:13.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-5554 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 11 15:13:13.837: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 11 15:13:13.837: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 11 15:13:13.837: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-2
Jan 11 15:13:23.873: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 11 15:13:33.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-5554 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 15:13:34.046: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Jan 11 15:13:34.046: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 11 15:13:34.046: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
STEP: Rolling back to a previous revision
Jan 11 15:13:54.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-5554 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 11 15:13:54.241: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 11 15:13:54.241: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 11 15:13:54.241: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 11 15:14:04.279: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 11 15:14:14.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-5554 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 15:14:14.467: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Jan 11 15:14:14.467: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 11 15:14:14.467: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120
Jan 11 15:14:24.491: INFO: Deleting all statefulset in ns statefulset-5554
Jan 11 15:14:24.495: INFO: Scaling statefulset ss2 to 0
Jan 11 15:14:34.511: INFO: Waiting for statefulset status.replicas updated to 0
Jan 11 15:14:34.514: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:14:34.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5554" for this suite.
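Note: the StatefulSet spec above performs a rolling update by changing the pod template image and then rolls it back. Outside the e2e framework, roughly the same update/rollback cycle can be driven with kubectl (statefulset name, namespace, and images are taken from the log; the container name "webserver" is an assumption):

# Trigger a rolling update by changing the template image
kubectl -n statefulset-5554 set image statefulset/ss2 webserver=k8s.gcr.io/e2e-test-images/httpd:2.4.39-2
kubectl -n statefulset-5554 rollout status statefulset/ss2

# Roll back by restoring the previous image; StatefulSets keep ControllerRevisions per template change
kubectl -n statefulset-5554 set image statefulset/ss2 webserver=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2
kubectl -n statefulset-5554 rollout status statefulset/ss2

# Inspect the revisions the two template changes created
kubectl -n statefulset-5554 get controllerrevisions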
•
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":26,"skipped":687,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:14:16.712: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 11 15:14:17.070: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 11 15:14:20.092: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering the webhook via the AdmissionRegistration API
Jan 11 15:14:30.113: INFO: Waiting for webhook configuration to be ready...
Jan 11 15:14:40.226: INFO: Waiting for webhook configuration to be ready...
Jan 11 15:14:50.328: INFO: Waiting for webhook configuration to be ready...
Jan 11 15:15:00.426: INFO: Waiting for webhook configuration to be ready...
Jan 11 15:15:10.438: INFO: Waiting for webhook configuration to be ready...
Jan 11 15:15:10.438: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002462c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.registerWebhookForAttachingPod(0xc0002e0160, {0xc003f51870, 0xc}, 0xc004ed73b0, 0x0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:939 +0x74a
k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.5()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:207 +0x45
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000da6340, 0x735e880)
    /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:15:10.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8382" for this suite.
STEP: Destroying namespace "webhook-8382-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [53.823 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Jan 11 15:15:10.438: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002462c0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:939
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:12:58.538: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
[It] should serve multiport endpoints from pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating service multi-endpoint-test in namespace services-2253
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2253 to expose endpoints map[]
Jan 11 15:12:58.612: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found
Jan 11 15:12:59.623: INFO: successfully validated that service multi-endpoint-test in namespace services-2253 exposes endpoints map[]
STEP: Creating pod pod1 in namespace services-2253
Jan 11 15:12:59.636: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Jan 11 15:13:01.640: INFO: The status of Pod pod1 is Running (Ready = true)
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2253 to expose endpoints map[pod1:[100]]
Jan 11 15:13:01.655: INFO: successfully validated that service multi-endpoint-test in namespace services-2253 exposes endpoints map[pod1:[100]]
STEP: Creating pod pod2 in namespace services-2253
Jan 11 15:13:01.665: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Jan 11 15:13:03.675: INFO: The status of Pod pod2 is Running (Ready = true)
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2253 to expose endpoints map[pod1:[100] pod2:[101]]
Jan 11 15:13:03.698: INFO: successfully validated that service multi-endpoint-test in namespace services-2253 exposes endpoints map[pod1:[100] pod2:[101]]
STEP: Checking if the Service forwards traffic to pods
Jan 11 15:13:03.698: INFO: Creating new exec pod
Jan 11 15:13:06.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2253 exec execpodrfpxr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Jan 11 15:13:06.897: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n"
Jan 11 15:13:06.897: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jan 11 15:13:06.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2253 exec execpodrfpxr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.129.208.235 80'
Jan 11 15:13:07.084: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.129.208.235 80\nConnection to 10.129.208.235 80 port [tcp/http] succeeded!\n"
Jan 11 15:13:07.084: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jan 11 15:13:07.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2253 exec execpodrfpxr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81'
Jan 11 15:13:09.248: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n"
Jan 11 15:13:09.248: INFO: stdout: ""
Jan 11 15:13:10.249 through Jan 11 15:15:07.249: INFO: the same 'echo hostName | nc -v -t -w 2 multi-endpoint-test 81' exec was rerun roughly once per second; every attempt logged "Connection to multi-endpoint-test 81 port [tcp/*] succeeded!" on stderr and returned an empty stdout
Jan 11 15:15:09.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2253 exec execpodrfpxr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81'
Jan 11 15:15:11.585: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n"
Jan 11 15:15:11.585: INFO: stdout: ""
Jan 11 15:15:11.586: FAIL: Unexpected error:
    <*errors.errorString | 0xc00245e360>: {
        s: "service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:81 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:81 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.5()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:916 +0x7c6
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x0)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0005f8340, 0x735e880)
    /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:15:11.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2253" for this suite.
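Note: the spec above creates a two-port Service (service port 80 backed by pod1 on container port 100, service port 81 backed by pod2 on container port 101) and probes both ports from an exec pod with nc; here only port 81 never answers. As a rough reproduction outside the framework (selector, port names, and probing pod are assumptions; names, namespace, and port numbers come from the log), the Service could look like:

kubectl -n services-2253 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test    # selector assumed; the e2e test backs the Service with labelled pods pod1/pod2
  ports:
  - name: portname1
    port: 80
    targetPort: 100             # served by pod1, per the endpoints map in the log
  - name: portname2
    port: 81
    targetPort: 101             # served by pod2, per the endpoints map in the log
EOF

# Probe both service ports from a pod in the namespace, mirroring what the test does
kubectl -n services-2253 exec execpodrfpxr -- /bin/sh -c 'echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
kubectl -n services-2253 exec execpodrfpxr -- /bin/sh -c 'echo hostName | nc -v -t -w 2 multi-endpoint-test 81'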
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756

• Failure [133.174 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve multiport endpoints from pods [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Jan 11 15:15:11.586: Unexpected error:
      <*errors.errorString | 0xc00245e360>: {
          s: "service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:81 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:81 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:916
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:14:34.564: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
Jan 11 15:14:34.898: INFO: role binding webhook-auth-reader already exists
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 11 15:14:34.917: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 11 15:14:37.941: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering the crd webhook via the AdmissionRegistration API
Jan 11 15:14:47.964: INFO: Waiting for webhook configuration to be ready...
Jan 11 15:14:58.080: INFO: Waiting for webhook configuration to be ready...
Jan 11 15:15:08.178: INFO: Waiting for webhook configuration to be ready...
Jan 11 15:15:18.277: INFO: Waiting for webhook configuration to be ready...
Jan 11 15:15:28.288: INFO: Waiting for webhook configuration to be ready...
Jan 11 15:15:28.289: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002482c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.registerValidatingWebhookForCRD(0xc0005a0580, {0xc003cf52a0, 0xc}, 0xc0006230e0, 0x0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:2037 +0x74a
k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.12()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:306 +0x45
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000604680, 0x735e880)
    /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:15:28.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4931" for this suite.
STEP: Destroying namespace "webhook-4931-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [53.793 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Jan 11 15:15:28.289: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002482c0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:2037
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":48,"skipped":1162,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:15:10.541: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 11 15:15:10.961: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 11 15:15:13.993: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering the webhook via the AdmissionRegistration API
Jan 11 15:15:24.012: INFO: Waiting for webhook configuration to be ready...
Jan 11 15:15:34.124: INFO: Waiting for webhook configuration to be ready...
Jan 11 15:15:44.226: INFO: Waiting for webhook configuration to be ready...
Jan 11 15:15:54.324: INFO: Waiting for webhook configuration to be ready...
Jan 11 15:16:04.334: INFO: Waiting for webhook configuration to be ready...
Jan 11 15:16:04.334: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002462c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.registerWebhookForAttachingPod(0xc0002e0160, {0xc0033c4060, 0xc}, 0xc002ee3e00, 0x0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:939 +0x74a
k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.5()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:207 +0x45
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000da6340, 0x735e880)
    /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:16:04.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3215" for this suite.
STEP: Destroying namespace "webhook-3215-markers" for this suite.
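Note: each of these webhook failures happens while registering an admission webhook via the AdmissionRegistration API and waiting for it to take effect. For orientation only (metadata name, path, and CA bundle are assumptions, not the test's actual objects; the service name and namespace come from the log), the kind of configuration registered for the "deny attaching pod" case looks roughly like:

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-attaching-pod.k8s.io        # name assumed
webhooks:
- name: deny-attaching-pod.k8s.io
  clientConfig:
    service:
      namespace: webhook-3215            # namespace from the log
      name: e2e-test-webhook             # service name from the log
      path: /pods/attach                 # path assumed
    caBundle: LS0tLS1CRUdJTi...          # placeholder; base64 of the serving CA
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CONNECT"]
    resources: ["pods/attach"]
  failurePolicy: Fail
  sideEffects: None
  admissionReviewVersions: ["v1"]
EOF

Once such a configuration is ready, the API server consults the webhook service on every pods/attach CONNECT request, which is why the test first has to wait for the configuration to become effective before it can assert that attaches are denied.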
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [53.888 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Jan 11 15:16:04.334: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002462c0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:939
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":26,"skipped":690,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:15:28.360: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 11 15:15:29.274: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 11 15:15:32.305: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering the crd webhook via the AdmissionRegistration API
Jan 11 15:15:42.325: INFO: Waiting for webhook configuration to be ready...
Jan 11 15:15:52.436: INFO: Waiting for webhook configuration to be ready...
Jan 11 15:16:02.541: INFO: Waiting for webhook configuration to be ready...
Jan 11 15:16:12.638: INFO: Waiting for webhook configuration to be ready...
Jan 11 15:16:22.649: INFO: Waiting for webhook configuration to be ready...
Jan 11 15:16:22.649: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002482c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.registerValidatingWebhookForCRD(0xc0005a0580, {0xc003ba70a0, 0xc}, 0xc003b0af00, 0x0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:2037 +0x74a
k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.12()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:306 +0x45
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000604680, 0x735e880)
    /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:16:22.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1095" for this suite.
STEP: Destroying namespace "webhook-1095-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [54.366 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Jan 11 15:16:22.649: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002482c0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:2037
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":26,"skipped":690,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:16:22.729: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
�[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 11 15:16:23.314: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 11 15:16:26.341: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Registering the crd webhook via the AdmissionRegistration API �[1mSTEP�[0m: Creating a custom resource definition that should be denied by the webhook Jan 11 15:16:26.359: INFO: >>> kubeConfig: /tmp/kubeconfig [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:16:26.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-2360" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-2360-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":27,"skipped":690,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:16:26.585: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename gc �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 11 15:16:26.653: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"8170d246-55fa-448d-93e9-96db46b3971a", Controller:(*bool)(0xc0050aa47a), BlockOwnerDeletion:(*bool)(0xc0050aa47b)}} Jan 11 15:16:26.659: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"b5f33523-e861-4366-8c63-46947def7ad5", Controller:(*bool)(0xc0050aa716), BlockOwnerDeletion:(*bool)(0xc0050aa717)}} Jan 11 15:16:26.666: INFO: 
pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"4a822a24-f853-4b94-a4c1-ee1c541b8df5", Controller:(*bool)(0xc0050aa986), BlockOwnerDeletion:(*bool)(0xc0050aa987)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:16:31.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "gc-5806" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":28,"skipped":720,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:16:31.724: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename containers �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test override arguments Jan 11 15:16:31.759: INFO: Waiting up to 5m0s for pod "client-containers-e061fd46-17c7-4bcd-a467-b9cb8f7a34ed" in namespace "containers-7731" to be "Succeeded or Failed" Jan 11 15:16:31.762: INFO: Pod "client-containers-e061fd46-17c7-4bcd-a467-b9cb8f7a34ed": Phase="Pending", Reason="", readiness=false. Elapsed: 3.657981ms Jan 11 15:16:33.775: INFO: Pod "client-containers-e061fd46-17c7-4bcd-a467-b9cb8f7a34ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016074853s Jan 11 15:16:35.781: INFO: Pod "client-containers-e061fd46-17c7-4bcd-a467-b9cb8f7a34ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022039892s �[1mSTEP�[0m: Saw pod success Jan 11 15:16:35.781: INFO: Pod "client-containers-e061fd46-17c7-4bcd-a467-b9cb8f7a34ed" satisfied condition "Succeeded or Failed" Jan 11 15:16:35.785: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-worker-b15lfw pod client-containers-e061fd46-17c7-4bcd-a467-b9cb8f7a34ed container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 11 15:16:35.811: INFO: Waiting for pod client-containers-e061fd46-17c7-4bcd-a467-b9cb8f7a34ed to disappear Jan 11 15:16:35.814: INFO: Pod client-containers-e061fd46-17c7-4bcd-a467-b9cb8f7a34ed no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:16:35.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "containers-7731" for this suite. 
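For reference, the garbage-collector dependency-circle entries a little further up (pod1 owned by pod3, pod2 by pod1, pod3 by pod2) are wired with plain OwnerReferences exactly like the ones dumped in the log; a sketch of how one leg of that cycle is set, with the helper name being illustrative:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ownPodBy marks child as owned by owner, mirroring the OwnerReferences printed
// above. Applying it as pod1->pod3, pod2->pod1, pod3->pod2 produces the circle;
// the conformance test then verifies that deletion is not wedged by the cycle.
func ownPodBy(child, owner *corev1.Pod) {
	controller, block := true, true
	child.OwnerReferences = []metav1.OwnerReference{{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               owner.Name,
		UID:                owner.UID,
		Controller:         &controller,
		BlockOwnerDeletion: &block,
	}}
}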
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":735,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:16:35.849: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename events �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Create set of events �[1mSTEP�[0m: get a list of Events with a label in the current namespace �[1mSTEP�[0m: delete a list of events Jan 11 15:16:35.889: INFO: requesting DeleteCollection of events �[1mSTEP�[0m: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:16:35.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "events-7394" for this suite. 
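The Events API entries above ("get a list of Events with a label in the current namespace", then "requesting DeleteCollection of events") exercise DeleteCollection with a label selector. With client-go that is roughly the following; the selector value is made up for illustration:

package e2esketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteLabelledEvents removes every Event in ns carrying the given label,
// mirroring the "delete a list of events" step logged above.
func deleteLabelledEvents(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
	return c.EventsV1().Events(ns).DeleteCollection(ctx,
		metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: selector},
	)
}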
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":30,"skipped":747,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:16:35.963: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Jan 11 15:16:35.989: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9315e7a5-06ae-4b06-957e-676acb6526c2" in namespace "projected-3441" to be "Succeeded or Failed" Jan 11 15:16:35.993: INFO: Pod "downwardapi-volume-9315e7a5-06ae-4b06-957e-676acb6526c2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.704229ms Jan 11 15:16:37.999: INFO: Pod "downwardapi-volume-9315e7a5-06ae-4b06-957e-676acb6526c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009576661s Jan 11 15:16:40.004: INFO: Pod "downwardapi-volume-9315e7a5-06ae-4b06-957e-676acb6526c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014674414s �[1mSTEP�[0m: Saw pod success Jan 11 15:16:40.004: INFO: Pod "downwardapi-volume-9315e7a5-06ae-4b06-957e-676acb6526c2" satisfied condition "Succeeded or Failed" Jan 11 15:16:40.008: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-md-0-chlxb-c9b4c6cfd-zv6w2 pod downwardapi-volume-9315e7a5-06ae-4b06-957e-676acb6526c2 container client-container: <nil> �[1mSTEP�[0m: delete the pod Jan 11 15:16:40.035: INFO: Waiting for pod downwardapi-volume-9315e7a5-06ae-4b06-957e-676acb6526c2 to disappear Jan 11 15:16:40.038: INFO: Pod downwardapi-volume-9315e7a5-06ae-4b06-957e-676acb6526c2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:16:40.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-3441" for this suite. 
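The projected downwardAPI test above mounts a volume whose file is populated from the container's memory request (a resourceFieldRef on "requests.memory"). In Go API types the volume looks roughly like this; the volume and file names are illustrative, not the test's exact values:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
)

// memoryRequestVolume builds a projected downward-API volume exposing the named
// container's memory request as a single file, the shape exercised above.
func memoryRequestVolume(containerName string) corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: containerName,
								Resource:      "requests.memory",
							},
						}},
					},
				}},
			},
		},
	}
}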
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":777,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:16:40.059: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test emptydir 0644 on node default medium Jan 11 15:16:40.093: INFO: Waiting up to 5m0s for pod "pod-c61f3a0e-7349-4c81-bc8d-b2478394ef3a" in namespace "emptydir-5980" to be "Succeeded or Failed" Jan 11 15:16:40.097: INFO: Pod "pod-c61f3a0e-7349-4c81-bc8d-b2478394ef3a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.980711ms Jan 11 15:16:42.102: INFO: Pod "pod-c61f3a0e-7349-4c81-bc8d-b2478394ef3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008888417s Jan 11 15:16:44.108: INFO: Pod "pod-c61f3a0e-7349-4c81-bc8d-b2478394ef3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014345451s �[1mSTEP�[0m: Saw pod success Jan 11 15:16:44.108: INFO: Pod "pod-c61f3a0e-7349-4c81-bc8d-b2478394ef3a" satisfied condition "Succeeded or Failed" Jan 11 15:16:44.113: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-worker-b15lfw pod pod-c61f3a0e-7349-4c81-bc8d-b2478394ef3a container test-container: <nil> �[1mSTEP�[0m: delete the pod Jan 11 15:16:44.132: INFO: Waiting for pod pod-c61f3a0e-7349-4c81-bc8d-b2478394ef3a to disappear Jan 11 15:16:44.136: INFO: Pod pod-c61f3a0e-7349-4c81-bc8d-b2478394ef3a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:16:44.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-5980" for this suite. 
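The emptydir (non-root,0644,default) case above boils down to a pod that mounts an emptyDir on the node's default medium, writes a file as a non-root UID and checks the 0644 permissions. The volume itself is just an empty EmptyDirVolumeSource; the name below is illustrative:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
)

// defaultMediumEmptyDir is the volume behind the (non-root,0644,default) case:
// leaving Medium unset selects the node's default backing store (node disk).
func defaultMediumEmptyDir() corev1.Volume {
	return corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{},
		},
	}
}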
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":782,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:16:44.164: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Jan 11 15:16:44.205: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5c22215f-077c-415f-974f-5b285d8c5256" in namespace "downward-api-1289" to be "Succeeded or Failed" Jan 11 15:16:44.209: INFO: Pod "downwardapi-volume-5c22215f-077c-415f-974f-5b285d8c5256": Phase="Pending", Reason="", readiness=false. Elapsed: 3.479012ms Jan 11 15:16:46.215: INFO: Pod "downwardapi-volume-5c22215f-077c-415f-974f-5b285d8c5256": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009507387s Jan 11 15:16:48.220: INFO: Pod "downwardapi-volume-5c22215f-077c-415f-974f-5b285d8c5256": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01515958s �[1mSTEP�[0m: Saw pod success Jan 11 15:16:48.221: INFO: Pod "downwardapi-volume-5c22215f-077c-415f-974f-5b285d8c5256" satisfied condition "Succeeded or Failed" Jan 11 15:16:48.224: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-md-0-chlxb-c9b4c6cfd-zv6w2 pod downwardapi-volume-5c22215f-077c-415f-974f-5b285d8c5256 container client-container: <nil> �[1mSTEP�[0m: delete the pod Jan 11 15:16:48.245: INFO: Waiting for pod downwardapi-volume-5c22215f-077c-415f-974f-5b285d8c5256 to disappear Jan 11 15:16:48.249: INFO: Pod downwardapi-volume-5c22215f-077c-415f-974f-5b285d8c5256 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:16:48.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-1289" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":790,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:16:48.272: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test emptydir 0644 on tmpfs Jan 11 15:16:48.325: INFO: Waiting up to 5m0s for pod "pod-e7800cef-2670-4b3c-9fc1-fffbd6488fa9" in namespace "emptydir-925" to be "Succeeded or Failed" Jan 11 15:16:48.329: INFO: Pod "pod-e7800cef-2670-4b3c-9fc1-fffbd6488fa9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.673138ms Jan 11 15:16:50.334: INFO: Pod "pod-e7800cef-2670-4b3c-9fc1-fffbd6488fa9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008606514s Jan 11 15:16:52.338: INFO: Pod "pod-e7800cef-2670-4b3c-9fc1-fffbd6488fa9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013039648s �[1mSTEP�[0m: Saw pod success Jan 11 15:16:52.338: INFO: Pod "pod-e7800cef-2670-4b3c-9fc1-fffbd6488fa9" satisfied condition "Succeeded or Failed" Jan 11 15:16:52.342: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-worker-b15lfw pod pod-e7800cef-2670-4b3c-9fc1-fffbd6488fa9 container test-container: <nil> �[1mSTEP�[0m: delete the pod Jan 11 15:16:52.362: INFO: Waiting for pod pod-e7800cef-2670-4b3c-9fc1-fffbd6488fa9 to disappear Jan 11 15:16:52.365: INFO: Pod pod-e7800cef-2670-4b3c-9fc1-fffbd6488fa9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:16:52.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-925" for this suite. 
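The (root,0644,tmpfs) variant above differs from the default-medium case only in the emptyDir medium, which switches the backing store to memory (tmpfs); again only the medium matters, the volume name is illustrative:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
)

// tmpfsEmptyDir backs the emptyDir with memory (tmpfs) instead of node disk,
// which is what the (root,0644,tmpfs) case above exercises.
func tmpfsEmptyDir() corev1.Volume {
	return corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
		},
	}
}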
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":796,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":48,"skipped":1162,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:16:04.432: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 11 15:16:05.176: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 11 15:16:08.205: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering the webhook via the AdmissionRegistration API
Jan 11 15:16:18.227: INFO: Waiting for webhook configuration to be ready...
Jan 11 15:16:28.340: INFO: Waiting for webhook configuration to be ready...
Jan 11 15:16:38.443: INFO: Waiting for webhook configuration to be ready...
Jan 11 15:16:48.540: INFO: Waiting for webhook configuration to be ready...
Jan 11 15:16:58.558: INFO: Waiting for webhook configuration to be ready...
Jan 11 15:16:58.558: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002462c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.registerWebhookForAttachingPod(0xc0002e0160, {0xc003bc28c0, 0xc}, 0xc004314690, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:939 +0x74a
k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.5()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:207 +0x45
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000da6340, 0x735e880)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:16:58.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8364" for this suite.
STEP: Destroying namespace "webhook-8364-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• Failure [54.207 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Jan 11 15:16:58.558: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002462c0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:939
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":48,"skipped":1162,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
SSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:16:58.666: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
�[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 11 15:16:58.739: INFO: Waiting up to 5m0s for pod "busybox-user-65534-c022fd0f-d8c5-4863-8666-843d37070900" in namespace "security-context-test-9126" to be "Succeeded or Failed" Jan 11 15:16:58.762: INFO: Pod "busybox-user-65534-c022fd0f-d8c5-4863-8666-843d37070900": Phase="Pending", Reason="", readiness=false. Elapsed: 22.788477ms Jan 11 15:17:00.767: INFO: Pod "busybox-user-65534-c022fd0f-d8c5-4863-8666-843d37070900": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028736478s Jan 11 15:17:02.773: INFO: Pod "busybox-user-65534-c022fd0f-d8c5-4863-8666-843d37070900": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033855456s Jan 11 15:17:02.773: INFO: Pod "busybox-user-65534-c022fd0f-d8c5-4863-8666-843d37070900" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:17:02.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "security-context-test-9126" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":1170,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:16:52.407: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename dns �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a test headless service �[1mSTEP�[0m: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8130.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8130.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8130.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@dns-test-service.dns-8130.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8130.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8130.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8130.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8130.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8130.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8130.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 39.164.133.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.133.164.39_udp@PTR;check="$$(dig +tcp +noall +answer +search 39.164.133.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.133.164.39_tcp@PTR;sleep 1; done �[1mSTEP�[0m: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8130.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8130.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8130.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8130.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8130.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8130.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8130.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8130.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8130.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8130.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 39.164.133.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.133.164.39_udp@PTR;check="$$(dig +tcp +noall +answer +search 39.164.133.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.133.164.39_tcp@PTR;sleep 1; done �[1mSTEP�[0m: creating a pod to probe DNS �[1mSTEP�[0m: submitting the pod to kubernetes �[1mSTEP�[0m: retrieving the pod �[1mSTEP�[0m: looking for the results for each expected name from probers Jan 11 15:16:54.499: INFO: Unable to read wheezy_udp@dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:16:54.502: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:16:54.506: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:16:54.510: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:16:54.529: INFO: Unable to read jessie_udp@dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:16:54.534: INFO: Unable to read jessie_tcp@dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:16:54.537: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:16:54.540: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:16:54.555: INFO: Lookups using dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15 failed for: [wheezy_udp@dns-test-service.dns-8130.svc.cluster.local wheezy_tcp@dns-test-service.dns-8130.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local jessie_udp@dns-test-service.dns-8130.svc.cluster.local jessie_tcp@dns-test-service.dns-8130.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local] Jan 11 15:16:59.561: INFO: Unable to read wheezy_udp@dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:16:59.565: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods 
dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:16:59.568: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:16:59.572: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:16:59.593: INFO: Unable to read jessie_udp@dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:16:59.596: INFO: Unable to read jessie_tcp@dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:16:59.601: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:16:59.606: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:16:59.625: INFO: Lookups using dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15 failed for: [wheezy_udp@dns-test-service.dns-8130.svc.cluster.local wheezy_tcp@dns-test-service.dns-8130.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local jessie_udp@dns-test-service.dns-8130.svc.cluster.local jessie_tcp@dns-test-service.dns-8130.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local] Jan 11 15:17:04.560: INFO: Unable to read wheezy_udp@dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:04.566: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:04.570: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:04.574: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:04.598: INFO: Unable to read jessie_udp@dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the 
server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:04.603: INFO: Unable to read jessie_tcp@dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:04.607: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:04.611: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:04.630: INFO: Lookups using dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15 failed for: [wheezy_udp@dns-test-service.dns-8130.svc.cluster.local wheezy_tcp@dns-test-service.dns-8130.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local jessie_udp@dns-test-service.dns-8130.svc.cluster.local jessie_tcp@dns-test-service.dns-8130.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local] Jan 11 15:17:09.561: INFO: Unable to read wheezy_udp@dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:09.565: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:09.569: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:09.573: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:09.592: INFO: Unable to read jessie_udp@dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:09.596: INFO: Unable to read jessie_tcp@dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:09.599: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:09.603: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local from pod 
dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:09.620: INFO: Lookups using dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15 failed for: [wheezy_udp@dns-test-service.dns-8130.svc.cluster.local wheezy_tcp@dns-test-service.dns-8130.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local jessie_udp@dns-test-service.dns-8130.svc.cluster.local jessie_tcp@dns-test-service.dns-8130.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local] Jan 11 15:17:14.561: INFO: Unable to read wheezy_udp@dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:14.565: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:14.569: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:14.573: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:14.598: INFO: Unable to read jessie_udp@dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:14.604: INFO: Unable to read jessie_tcp@dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:14.609: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:14.614: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:14.635: INFO: Lookups using dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15 failed for: [wheezy_udp@dns-test-service.dns-8130.svc.cluster.local wheezy_tcp@dns-test-service.dns-8130.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local jessie_udp@dns-test-service.dns-8130.svc.cluster.local jessie_tcp@dns-test-service.dns-8130.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local] Jan 11 
15:17:19.562: INFO: Unable to read wheezy_udp@dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:19.567: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:19.572: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:19.576: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:19.603: INFO: Unable to read jessie_udp@dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:19.607: INFO: Unable to read jessie_tcp@dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:19.612: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:19.617: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local from pod dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15: the server could not find the requested resource (get pods dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15) Jan 11 15:17:19.636: INFO: Lookups using dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15 failed for: [wheezy_udp@dns-test-service.dns-8130.svc.cluster.local wheezy_tcp@dns-test-service.dns-8130.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local jessie_udp@dns-test-service.dns-8130.svc.cluster.local jessie_tcp@dns-test-service.dns-8130.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8130.svc.cluster.local] Jan 11 15:17:24.630: INFO: DNS probes using dns-8130/dns-test-4075b27e-a6a9-4799-ad86-2900f4a04e15 succeeded �[1mSTEP�[0m: deleting the pod �[1mSTEP�[0m: deleting the test service �[1mSTEP�[0m: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:17:24.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "dns-8130" for this suite. 
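The dig probes above resolve the service's A, SRV and PTR records from inside the cluster until all of them answer. The SRV half of that check, run from a pod so the default resolver is the cluster DNS, is equivalent to the lookup below; the service and namespace names are taken from the log, the helper name is illustrative:

package e2esketch

import (
	"fmt"
	"net"
)

// lookupServiceSRV performs the same resolution as the
// "_http._tcp.dns-test-service.dns-8130.svc.cluster.local" probes above:
// an SRV query for the service's named port, which should answer once the
// service and its endpoints are up.
func lookupServiceSRV() error {
	_, srvs, err := net.LookupSRV("http", "tcp", "dns-test-service.dns-8130.svc.cluster.local")
	if err != nil {
		return err
	}
	for _, s := range srvs {
		fmt.Printf("%s:%d (priority %d, weight %d)\n", s.Target, s.Port, s.Priority, s.Weight)
	}
	return nil
}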
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":35,"skipped":809,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m {"msg":"FAILED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":9,"skipped":164,"failed":4,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-network] DNS should provide DNS for the cluster [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:15:11.715: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating service multi-endpoint-test in namespace services-9597 �[1mSTEP�[0m: waiting up to 3m0s for service multi-endpoint-test in namespace services-9597 to expose endpoints map[] Jan 11 15:15:11.791: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found Jan 11 15:15:12.805: INFO: successfully validated that service multi-endpoint-test in namespace services-9597 exposes endpoints map[] �[1mSTEP�[0m: Creating pod pod1 in namespace services-9597 Jan 11 15:15:12.825: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:14.829: INFO: The status of Pod pod1 is Running (Ready = true) �[1mSTEP�[0m: waiting up to 3m0s for service multi-endpoint-test in namespace services-9597 to expose endpoints map[pod1:[100]] Jan 11 15:15:14.846: INFO: successfully validated that service multi-endpoint-test in namespace services-9597 exposes endpoints map[pod1:[100]] �[1mSTEP�[0m: Creating pod pod2 in namespace services-9597 Jan 11 15:15:14.856: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:16.861: INFO: The status of Pod pod2 is Running (Ready = true) �[1mSTEP�[0m: waiting up to 3m0s for service multi-endpoint-test in namespace services-9597 to expose endpoints map[pod1:[100] pod2:[101]] Jan 11 15:15:16.880: INFO: successfully validated that service multi-endpoint-test in namespace services-9597 exposes endpoints map[pod1:[100] 
pod2:[101]] �[1mSTEP�[0m: Checking if the Service forwards traffic to pods Jan 11 15:15:16.880: INFO: Creating new exec pod Jan 11 15:15:19.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 11 15:15:20.064: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 11 15:15:20.064: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jan 11 15:15:20.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.143.128.52 80' Jan 11 15:15:20.224: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.143.128.52 80\nConnection to 10.143.128.52 80 port [tcp/http] succeeded!\n" Jan 11 15:15:20.224: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jan 11 15:15:20.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:15:22.382: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:15:22.382: INFO: stdout: "" Jan 11 15:15:23.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:15:25.552: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:15:25.552: INFO: stdout: "" Jan 11 15:15:26.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:15:28.547: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:15:28.547: INFO: stdout: "" Jan 11 15:15:29.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:15:31.544: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:15:31.544: INFO: stdout: "" Jan 11 15:15:32.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:15:34.531: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:15:34.531: INFO: stdout: "" Jan 11 15:15:35.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:15:37.583: INFO: stderr: "+ + ncecho -v hostName\n -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:15:37.583: INFO: stdout: "" Jan 11 15:15:38.382: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:15:40.539: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:15:40.539: INFO: stdout: "" Jan 11 15:15:41.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:15:43.535: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:15:43.535: INFO: stdout: "" Jan 11 15:15:44.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:15:46.527: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:15:46.527: INFO: stdout: "" Jan 11 15:15:47.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:15:49.547: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:15:49.547: INFO: stdout: "" Jan 11 15:15:50.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:15:52.557: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:15:52.557: INFO: stdout: "" Jan 11 15:15:53.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:15:55.554: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:15:55.554: INFO: stdout: "" Jan 11 15:15:56.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:15:58.560: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:15:58.560: INFO: stdout: "" Jan 11 15:15:59.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:16:01.567: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:16:01.567: INFO: stdout: "" Jan 11 15:16:02.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:16:04.548: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:16:04.548: INFO: stdout: "" Jan 11 
15:16:05.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:16:07.554: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:16:07.554: INFO: stdout: "" Jan 11 15:16:08.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:16:10.744: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:16:10.744: INFO: stdout: "" Jan 11 15:16:11.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:16:13.539: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:16:13.539: INFO: stdout: "" Jan 11 15:16:14.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:16:16.558: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:16:16.558: INFO: stdout: "" Jan 11 15:16:17.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:16:19.570: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:16:19.570: INFO: stdout: "" Jan 11 15:16:20.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:16:22.545: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:16:22.545: INFO: stdout: "" Jan 11 15:16:23.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:16:25.556: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:16:25.556: INFO: stdout: "" Jan 11 15:16:26.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:16:28.651: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:16:28.651: INFO: stdout: "" Jan 11 15:16:29.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:16:31.583: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 
15:16:31.583: INFO: stdout: "" Jan 11 15:16:32.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:16:34.570: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:16:34.570: INFO: stdout: "" Jan 11 15:16:35.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:16:37.580: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:16:37.580: INFO: stdout: "" Jan 11 15:16:38.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:16:40.543: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:16:40.543: INFO: stdout: "" Jan 11 15:16:41.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:16:43.555: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:16:43.555: INFO: stdout: "" Jan 11 15:16:44.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:16:46.563: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:16:46.563: INFO: stdout: "" Jan 11 15:16:47.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:16:49.586: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:16:49.586: INFO: stdout: "" Jan 11 15:16:50.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:16:52.556: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:16:52.556: INFO: stdout: "" Jan 11 15:16:53.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:16:55.612: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:16:55.612: INFO: stdout: "" Jan 11 15:16:56.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:16:58.556: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port 
[tcp/*] succeeded!\n" Jan 11 15:16:58.556: INFO: stdout: "" Jan 11 15:16:59.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:17:01.554: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:17:01.554: INFO: stdout: "" Jan 11 15:17:02.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:17:04.549: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:17:04.549: INFO: stdout: "" Jan 11 15:17:05.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:17:07.581: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:17:07.581: INFO: stdout: "" Jan 11 15:17:08.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:17:10.576: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:17:10.576: INFO: stdout: "" Jan 11 15:17:11.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:17:13.540: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:17:13.540: INFO: stdout: "" Jan 11 15:17:14.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:17:16.544: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:17:16.544: INFO: stdout: "" Jan 11 15:17:17.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:17:19.584: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:17:19.584: INFO: stdout: "" Jan 11 15:17:20.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:17:22.533: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:17:22.534: INFO: stdout: "" Jan 11 15:17:22.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9597 exec execpodrpkps -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:17:24.723: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to 
multi-endpoint-test 81 port [tcp/*] succeeded!\n"
Jan 11 15:17:24.723: INFO: stdout: ""
Jan 11 15:17:24.723: FAIL: Unexpected error: <*errors.errorString | 0xc0031a6040>: { s: "service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:81 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:81 over TCP protocol occurred
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.5()
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:916 +0x7c6
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x0)
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0005f8340, 0x735e880)
/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:17:24.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9597" for this suite.
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
• Failure [133.278 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should serve multiport endpoints from pods [Conformance] [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 11 15:17:24.723: Unexpected error: <*errors.errorString | 0xc0031a6040>: { s: "service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:81 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:81 over TCP protocol occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:916
------------------------------
[BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:17:24.934: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-922.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-922.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-922.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-922.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 11 15:17:29.037: INFO: DNS probes using dns-922/dns-test-90bee44a-5a6c-47f6-88a7-895705773fad succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:17:29.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-922" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":36,"skipped":831,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:17:29.063: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109
STEP: Creating service test in namespace statefulset-8585
[It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-8585
STEP: Waiting until pod test-pod will start running in namespace statefulset-8585
STEP: Creating statefulset with conflicting port in namespace statefulset-8585
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-8585
Jan 11 15:17:31.146: INFO: Observed stateful pod in namespace: statefulset-8585, name: ss-0, uid: d9f2211e-fa7d-495f-8f74-6391dfc68d62, status phase: Pending. Waiting for statefulset controller to delete.
Jan 11 15:17:31.171: INFO: Observed stateful pod in namespace: statefulset-8585, name: ss-0, uid: d9f2211e-fa7d-495f-8f74-6391dfc68d62, status phase: Failed. Waiting for statefulset controller to delete.
Jan 11 15:17:31.195: INFO: Observed stateful pod in namespace: statefulset-8585, name: ss-0, uid: d9f2211e-fa7d-495f-8f74-6391dfc68d62, status phase: Failed. Waiting for statefulset controller to delete.
Jan 11 15:17:31.200: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8585
STEP: Removing pod with conflicting port in namespace statefulset-8585
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-8585 and will be in running state
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120
Jan 11 15:17:33.237: INFO: Deleting all statefulset in ns statefulset-8585
Jan 11 15:17:33.240: INFO: Scaling statefulset ss to 0
Jan 11 15:17:43.261: INFO: Waiting for statefulset status.replicas updated to 0
Jan 11 15:17:43.264: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:17:43.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8585" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":37,"skipped":832,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
------------------------------
[BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:17:43.301: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
[It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: fetching services
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:17:43.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2" for this suite.
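For context on the "service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:81 over TCP protocol" failure logged earlier: the spec repeatedly execs `nc` against the service and only passes once a non-empty reply comes back within the overall window (the retries above connect but return an empty stdout). The following is a minimal, hypothetical Go sketch of that polling pattern using only the standard library; the function names, probe string handling, and retry cadence are illustrative and are not the e2e framework's actual helper.

```go
package main

import (
	"bufio"
	"fmt"
	"net"
	"strings"
	"time"
)

// checkEndpoint dials the endpoint, sends the same "hostName" probe the spec
// echoes through nc, and requires a non-empty reply (the log shows the TCP
// connect succeeding while stdout stays empty, which is what failed).
func checkEndpoint(addr string, perAttempt time.Duration) (string, error) {
	conn, err := net.DialTimeout("tcp", addr, perAttempt)
	if err != nil {
		return "", err
	}
	defer conn.Close()
	_ = conn.SetDeadline(time.Now().Add(perAttempt))
	fmt.Fprintf(conn, "hostName\n")
	reply, _ := bufio.NewReader(conn).ReadString('\n')
	if strings.TrimSpace(reply) == "" {
		return "", fmt.Errorf("connected to %s but got no response", addr)
	}
	return strings.TrimSpace(reply), nil
}

// waitForEndpoint retries until a response arrives or the overall deadline passes.
func waitForEndpoint(addr string, overall time.Duration) error {
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		if _, err := checkEndpoint(addr, 2*time.Second); err == nil {
			return nil
		}
		time.Sleep(3 * time.Second) // roughly the retry spacing visible in the log
	}
	return fmt.Errorf("service is not reachable within %s timeout on endpoint %s over TCP protocol", overall, addr)
}

func main() {
	// The failed spec was probing port 81 of the "multi-endpoint-test" service.
	if err := waitForEndpoint("multi-endpoint-test:81", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```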
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
•
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":38,"skipped":846,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:17:43.386: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating projection with secret that has name secret-emptykey-test-c9133f59-6c57-447c-ae48-ad5affc7c98f
[AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:17:43.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2598" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":39,"skipped":882,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
------------------------------
[BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:17:43.462: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 11 15:17:43.483: INFO: Creating pod...
Jan 11 15:17:43.496: INFO: Pod Quantity: 1 Status: Pending
Jan 11 15:17:44.500: INFO: Pod Status: Running
Jan 11 15:17:44.501: INFO: Creating service...
Jan 11 15:17:44.511: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-7595/pods/agnhost/proxy/some/path/with/DELETE Jan 11 15:17:44.521: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE Jan 11 15:17:44.521: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-7595/pods/agnhost/proxy/some/path/with/GET Jan 11 15:17:44.529: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET Jan 11 15:17:44.530: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-7595/pods/agnhost/proxy/some/path/with/HEAD Jan 11 15:17:44.540: INFO: http.Client request:HEAD | StatusCode:200 Jan 11 15:17:44.540: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-7595/pods/agnhost/proxy/some/path/with/OPTIONS Jan 11 15:17:44.550: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS Jan 11 15:17:44.550: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-7595/pods/agnhost/proxy/some/path/with/PATCH Jan 11 15:17:44.557: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH Jan 11 15:17:44.557: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-7595/pods/agnhost/proxy/some/path/with/POST Jan 11 15:17:44.562: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST Jan 11 15:17:44.562: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-7595/pods/agnhost/proxy/some/path/with/PUT Jan 11 15:17:44.568: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT Jan 11 15:17:44.568: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-7595/services/test-service/proxy/some/path/with/DELETE Jan 11 15:17:44.577: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE Jan 11 15:17:44.577: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-7595/services/test-service/proxy/some/path/with/GET Jan 11 15:17:44.588: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET Jan 11 15:17:44.589: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-7595/services/test-service/proxy/some/path/with/HEAD Jan 11 15:17:44.609: INFO: http.Client request:HEAD | StatusCode:200 Jan 11 15:17:44.610: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-7595/services/test-service/proxy/some/path/with/OPTIONS Jan 11 15:17:44.619: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS Jan 11 15:17:44.619: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-7595/services/test-service/proxy/some/path/with/PATCH Jan 11 15:17:44.631: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH Jan 11 15:17:44.631: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-7595/services/test-service/proxy/some/path/with/POST Jan 11 15:17:44.644: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST Jan 11 15:17:44.644: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-7595/services/test-service/proxy/some/path/with/PUT Jan 11 15:17:44.655: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT [AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 
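The ProxyWithPath requests above all go through the API server's proxy subresource, once for the pod "agnhost" and once for the service "test-service", with one request per HTTP verb. Below is a small, hypothetical Go sketch of how one such pod-proxy URL is shaped; the real client also needs the cluster CA and credentials from /tmp/kubeconfig, which is omitted here.

```go
package main

import (
	"fmt"
	"net/http"
)

// proxyURL builds the pod-proxy path shape used by the requests above, e.g.
// https://172.18.0.3:6443/api/v1/namespaces/proxy-7595/pods/agnhost/proxy/some/path/with/GET
func proxyURL(apiServer, namespace, pod, path string) string {
	return fmt.Sprintf("%s/api/v1/namespaces/%s/pods/%s/proxy/%s", apiServer, namespace, pod, path)
}

func main() {
	u := proxyURL("https://172.18.0.3:6443", "proxy-7595", "agnhost", "some/path/with/GET")
	// A real request additionally needs the cluster CA and credentials from
	// /tmp/kubeconfig wired into the http.Client; that setup is not shown here.
	req, err := http.NewRequest(http.MethodGet, u, nil)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(req.Method, req.URL)
}
```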
Jan 11 15:17:44.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7595" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":40,"skipped":901,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
[BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:17:44.674: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 11 15:17:44.714: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: creating the pod
STEP: submitting the pod to kubernetes
Jan 11 15:17:44.731: INFO: The status of Pod pod-logs-websocket-d72c3f56-3aca-4d4c-9a2a-8cf92960b1b9 is Pending, waiting for it to be Running (with Ready = true)
Jan 11 15:17:46.735: INFO: The status of Pod pod-logs-websocket-d72c3f56-3aca-4d4c-9a2a-8cf92960b1b9 is Running (Ready = true)
[AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:17:46.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5434" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":901,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:17:46.840: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-projected-all-test-volume-9c58eb14-b8e5-42f7-ab3f-b150170e6120
STEP: Creating secret with name secret-projected-all-test-volume-620cf892-7683-40ac-b417-15e62eca757b
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 11 15:17:46.901: INFO: Waiting up to 5m0s for pod "projected-volume-c6763f17-cc37-47a8-9a6d-ee35f4d4352a" in namespace "projected-1408" to be "Succeeded or Failed"
Jan 11 15:17:46.905: INFO: Pod "projected-volume-c6763f17-cc37-47a8-9a6d-ee35f4d4352a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.762419ms
Jan 11 15:17:48.909: INFO: Pod "projected-volume-c6763f17-cc37-47a8-9a6d-ee35f4d4352a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007578927s
Jan 11 15:17:50.916: INFO: Pod "projected-volume-c6763f17-cc37-47a8-9a6d-ee35f4d4352a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01479288s
STEP: Saw pod success
Jan 11 15:17:50.916: INFO: Pod "projected-volume-c6763f17-cc37-47a8-9a6d-ee35f4d4352a" satisfied condition "Succeeded or Failed"
Jan 11 15:17:50.920: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-worker-b15lfw pod projected-volume-c6763f17-cc37-47a8-9a6d-ee35f4d4352a container projected-all-volume-test: <nil>
STEP: delete the pod
Jan 11 15:17:50.940: INFO: Waiting for pod projected-volume-c6763f17-cc37-47a8-9a6d-ee35f4d4352a to disappear
Jan 11 15:17:50.943: INFO: Pod projected-volume-c6763f17-cc37-47a8-9a6d-ee35f4d4352a no longer exists
[AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:17:50.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1408" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":941,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:17:50.970: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 11 15:17:51.014: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5816 39f43976-7ac6-48c8-bc6a-6816c7dadfdc 10166 0 2023-01-11 15:17:50 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-11 15:17:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 11 15:17:51.015: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5816 39f43976-7ac6-48c8-bc6a-6816c7dadfdc 10168 0 2023-01-11 15:17:50 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-11 15:17:51 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 11 15:17:51.015: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5816 39f43976-7ac6-48c8-bc6a-6816c7dadfdc 10169 0 2023-01-11 15:17:50 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-11 15:17:51 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 11 15:18:01.050: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5816 39f43976-7ac6-48c8-bc6a-6816c7dadfdc 10208 0 2023-01-11 15:17:50 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-11 15:17:51 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 11 15:18:01.051: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5816 39f43976-7ac6-48c8-bc6a-6816c7dadfdc 10209 0 2023-01-11 15:17:50 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-11 15:17:51 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 11 15:18:01.051: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5816 39f43976-7ac6-48c8-bc6a-6816c7dadfdc 10210 0 2023-01-11 15:17:50 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-11 15:17:51 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:18:01.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5816" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":43,"skipped":949,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:18:01.091: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating the pod
Jan 11 15:18:01.132: INFO: The status of Pod labelsupdate46e2ed23-df67-4bef-a271-005c26cc62de is Pending, waiting for it to be Running (with Ready = true)
Jan 11 15:18:03.136: INFO: The status of Pod labelsupdate46e2ed23-df67-4bef-a271-005c26cc62de is Running (Ready = true)
Jan 11 15:18:03.658: INFO: Successfully updated pod "labelsupdate46e2ed23-df67-4bef-a271-005c26cc62de"
[AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:18:07.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-711" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":968,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:17:02.828: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77
Jan 11 15:17:02.854: INFO: >>> kubeConfig: /tmp/kubeconfig
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering the sample API server.
Jan 11 15:17:03.189: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jan 11 15:17:05.258: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 11, 15, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 11, 15, 17, 3, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 11, 15, 17, 3, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 11, 15, 17, 3, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7cdc9f5bf7\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 11 15:18:07.477: INFO: Waited 1m0.209220703s for the sample-apiserver to be ready to handle requests.
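The Aggregator spec above registers the sample API server and then waits (here just over a minute) for its Deployment to become available before checking the APIService. A rough, hypothetical sketch of such a wait using client-go is shown below; the clientset construction from /tmp/kubeconfig is omitted, and the helper name and polling cadence are illustrative rather than the test's own code.

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForDeploymentAvailable polls the Deployment until at least one replica
// is available or the timeout passes (the status dump above shows
// AvailableReplicas:0 while the rollout is still progressing).
func waitForDeploymentAvailable(cs kubernetes.Interface, namespace, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		d, err := cs.AppsV1().Deployments(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && d.Status.AvailableReplicas > 0 {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("deployment %s/%s did not become available within %s", namespace, name, timeout)
}

func main() {
	var cs kubernetes.Interface // in the real test this is built from /tmp/kubeconfig (omitted here)
	if cs == nil {
		fmt.Println("sketch only: build a clientset from the kubeconfig before calling waitForDeploymentAvailable")
		return
	}
	if err := waitForDeploymentAvailable(cs, "aggregator-3294", "sample-apiserver-deployment", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```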
Jan 11 15:18:07.477: INFO: current APIService: {"metadata":{"name":"v1alpha1.wardle.example.com","uid":"5689f14a-dbd9-4f73-bb5c-3848b04b9d34","resourceVersion":"10243","creationTimestamp":"2023-01-11T15:17:07Z","managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"apiregistration.k8s.io/v1","time":"2023-01-11T15:17:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:caBundle":{},"f:group":{},"f:groupPriorityMinimum":{},"f:service":{".":{},"f:name":{},"f:namespace":{},"f:port":{}},"f:version":{},"f:versionPriority":{}}}},{"manager":"kube-apiserver","operation":"Update","apiVersion":"apiregistration.k8s.io/v1","time":"2023-01-11T15:17:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}},"subresource":"status"}]},"spec":{"service":{"namespace":"aggregator-3294","name":"sample-api","port":7443},"group":"wardle.example.com","version":"v1alpha1","caBundle":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURGakNDQWY2Z0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFkTVJzd0dRWURWUVFERXhKbE1tVXQKYzJWeWRtVnlMV05sY25RdFkyRXdIaGNOTWpNd01URXhNVFV4TnpBeldoY05Nek13TVRBNE1UVXhOekF6V2pBZApNUnN3R1FZRFZRUURFeEpsTW1VdGMyVnlkbVZ5TFdObGNuUXRZMkV3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBCkE0SUJEd0F3Z2dFS0FvSUJBUUNpZ2dFeUpCRmpseThMVVhLd1JqUng0Ly9QdjZsVmZuOGQwNEhWVVhVSHJ0d28Kb3UrNGlzMldpQmZZQkw1V2xBVVlMQzBZSDQzM0NuSlBVUnNwZmQ4SS83eWszNVFNZ25UMFdJb3VIdXdsNGVFRQpWQ0NuTGlydmVBMWxYdHd3amgwQk1pZ1ZCZ1J5N0FvdGdNa0Y5QXZUeFJpM0U4WXA3aFp6Y3FCb0Q5YXQrMTFlCnpEOUVRNU02T2wxS3BiSkp2MkNYTVRnNVk4dUw1ZFFzSy9xRFpQK0VOdWhtMTgvWUxKOFhZWDlCY2Z4TXdXcmQKdHBtM2IxWjlYaW5PcFZtQ3hpMTdiU1ZWNmExNTJ5TTFaNXNlT1g0SG52RnlyT1dSOFBYTHdPSlZ0a1dPYkFOOQpra1lLcWNDYWhDV044UHFaOFpYd1NkZWJPYzV2U3lDeHFSMjd1eXhQQWdNQkFBR2pZVEJmTUE0R0ExVWREd0VCCi93UUVBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJTcmxVK1dxbDRjQnIzcHFldUsKQmFWMnU5bDliVEFkQmdOVkhSRUVGakFVZ2hKbE1tVXRjMlZ5ZG1WeUxXTmxjblF0WTJFd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBREdqZkdHWXdBU2JRM0JZZFQ5ZGZBeDN1bGJycTlaVWR3SGdzZ1NiaTd4Y2ZuQXJFYXpKCjRwNzhDbjFYaGJHVnowNXZWQ2Q5SjQ2dUw3VjVCVnhzU245NlhQeUNrWHZBTERJRXdTNGcraUwxLzI2aUpYNm4KYkVvenVoTjBNTzhqeE93MzNLWHhMWUlsNXVnOHNhQVUybEtjbEJjTGs2dDFEZXp0QWNyUDU5NEM2ME9VVnBQMgp0bnpSbStNRG15bithUVpGaFY5RmZVa01memxXVldWV1RVRFJ3TGpBb0Y5UnJ4YzltSFNWanBqcHExR2VaTkYwCmc0OXNoMFA4RGFjNmt5U2FjUTdBZWEzY3FENSs5MUR0UENaRmZxdmhmWTRrRzZmbzUyMmgvZ29nQ2tQRGtOeW4KNlRzNnorbnM1U2JERDhmT1pFTWV3K2ZJYWVFVjJHRWhaZnM9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K","groupPriorityMinimum":2000,"versionPriority":200},"status":{"conditions":[{"type":"Available","status":"False","lastTransitionTime":"2023-01-11T15:17:07Z","reason":"FailedDiscoveryCheck","message":"failing or missing response from https://10.134.126.198:7443/apis/wardle.example.com/v1alpha1: Get \"https://10.134.126.198:7443/apis/wardle.example.com/v1alpha1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"}]}} Jan 11 15:18:07.481: INFO: current pods: 
{"metadata":{"resourceVersion":"10244"},"items":[{"metadata":{"name":"sample-apiserver-deployment-7cdc9f5bf7-kjvkj","generateName":"sample-apiserver-deployment-7cdc9f5bf7-","namespace":"aggregator-3294","uid":"a3745632-b447-47c4-b448-57aaec5362eb","resourceVersion":"9736","creationTimestamp":"2023-01-11T15:17:03Z","labels":{"apiserver":"true","app":"sample-apiserver","pod-template-hash":"7cdc9f5bf7"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"sample-apiserver-deployment-7cdc9f5bf7","uid":"d72031e2-f196-45ab-b3d6-9d3da0b6ccfe","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-11T15:17:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:apiserver":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d72031e2-f196-45ab-b3d6-9d3da0b6ccfe\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"etcd\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}},"k:{\"name\":\"sample-apiserver\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/apiserver.local.config/certificates\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"apiserver-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-11T15:17:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.62\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"apiserver-certs","secret":{"secretName":"sample-apiserver-secret","defaultMode":420}},{"name":"kube-api-access-2s58f","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"sample-apiserver","image":"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.5","args":["--etcd-servers=http://127.0.0.1:2379","--tls-cert-file=/apiserver.local.config/certificates/tls.crt","--tls-private-key-file=/apiserver.local.config/certificates/tls.key","--audit-log-path=-","--audit-log-maxage=0","--audit-log-maxbackup=0"],"resources":{},"volumeMounts":[{"name":"apiserver-certs","readOnly":true,"mountPath":"/apiserver.local.config/certificates"},{"name":"kube-api-access-2s58f","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePoli
cy":"File","imagePullPolicy":"IfNotPresent"},{"name":"etcd","image":"k8s.gcr.io/etcd:3.5.6-0","command":["/usr/local/bin/etcd","--listen-client-urls","http://127.0.0.1:2379","--advertise-client-urls","http://127.0.0.1:2379"],"resources":{},"volumeMounts":[{"name":"kube-api-access-2s58f","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"k8s-upgrade-and-conformance-8jx80k-worker-b15lfw","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-11T15:17:03Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-11T15:17:06Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-11T15:17:06Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-11T15:17:03Z"}],"hostIP":"172.18.0.6","podIP":"192.168.2.62","podIPs":[{"ip":"192.168.2.62"}],"startTime":"2023-01-11T15:17:03Z","containerStatuses":[{"name":"etcd","state":{"running":{"startedAt":"2023-01-11T15:17:05Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/etcd:3.5.6-0","imageID":"k8s.gcr.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c","containerID":"containerd://a9dbdfbfee052611317870d1e40f4658aebf94f0156ee8475c8bbd8435973399","started":true},{"name":"sample-apiserver","state":{"running":{"startedAt":"2023-01-11T15:17:05Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.5","imageID":"k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:f9c93b92b6ff750b41a93c4e4fe0bfe384597aeb841e2539d5444815c55b2d8f","containerID":"containerd://c10d2652c19a552cea7e10be78ee9ee0aea5b22d69a319e1e661a88fce9b32d0","started":true}],"qosClass":"BestEffort"}}]} Jan 11 15:18:07.491: INFO: logs of sample-apiserver-deployment-7cdc9f5bf7-kjvkj/sample-apiserver (error: <nil>): W0111 15:17:05.860608 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found W0111 15:17:05.860785 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found I0111 15:17:05.898311 1 plugins.go:158] Loaded 3 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook,BanFlunder. I0111 15:17:05.898345 1 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook. 
I0111 15:17:05.900236 1 client.go:361] parsed scheme: "endpoint" I0111 15:17:05.900305 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0 <nil>}] I0111 15:17:05.902764 1 client.go:361] parsed scheme: "endpoint" I0111 15:17:05.903115 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0 <nil>}] I0111 15:17:05.905194 1 client.go:361] parsed scheme: "endpoint" I0111 15:17:05.905240 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0 <nil>}] I0111 15:17:05.907872 1 client.go:361] parsed scheme: "endpoint" I0111 15:17:05.907920 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0 <nil>}] I0111 15:17:05.960178 1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0111 15:17:05.960619 1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0111 15:17:05.960296 1 secure_serving.go:178] Serving securely on [::]:443 I0111 15:17:05.960339 1 tlsconfig.go:219] Starting DynamicServingCertificateController I0111 15:17:05.960357 1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0111 15:17:05.961491 1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0111 15:17:05.960343 1 dynamic_serving_content.go:129] Starting serving-cert::/apiserver.local.config/certificates/tls.crt::/apiserver.local.config/certificates/tls.key I0111 15:17:06.061745 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0111 15:17:06.062390 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0111 15:17:06.163146 1 client.go:361] parsed scheme: "endpoint" I0111 15:17:06.163237 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0 <nil>}] Jan 11 15:18:07.499: INFO: logs of sample-apiserver-deployment-7cdc9f5bf7-kjvkj/etcd (error: <nil>): {"level":"info","ts":"2023-01-11T15:17:05.518Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["/usr/local/bin/etcd","--listen-client-urls","http://127.0.0.1:2379","--advertise-client-urls","http://127.0.0.1:2379"]} {"level":"warn","ts":"2023-01-11T15:17:05.518Z","caller":"etcdmain/etcd.go:105","msg":"'data-dir' was empty; using default","data-dir":"default.etcd"} {"level":"info","ts":"2023-01-11T15:17:05.519Z","caller":"embed/etcd.go:124","msg":"configuring peer listeners","listen-peer-urls":["http://localhost:2380"]} {"level":"info","ts":"2023-01-11T15:17:05.520Z","caller":"embed/etcd.go:132","msg":"configuring client listeners","listen-client-urls":["http://127.0.0.1:2379"]} {"level":"info","ts":"2023-01-11T15:17:05.520Z","caller":"embed/etcd.go:306","msg":"starting an etcd 
server","etcd-version":"3.5.6","git-sha":"cecbe35ce","go-version":"go1.16.15","go-os":"linux","go-arch":"amd64","max-cpu-set":8,"max-cpu-available":8,"member-initialized":false,"name":"default","data-dir":"default.etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"default.etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":100000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["http://localhost:2380"],"listen-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://127.0.0.1:2379"],"listen-client-urls":["http://127.0.0.1:2379"],"listen-metrics-urls":[],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"default=http://localhost:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"} {"level":"info","ts":"2023-01-11T15:17:05.526Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"default.etcd/member/snap/db","took":"4.433059ms"} {"level":"info","ts":"2023-01-11T15:17:05.535Z","caller":"etcdserver/raft.go:494","msg":"starting local member","local-member-id":"8e9e05c52164694d","cluster-id":"cdf818194e3a8c32"} {"level":"info","ts":"2023-01-11T15:17:05.535Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d switched to configuration voters=()"} {"level":"info","ts":"2023-01-11T15:17:05.535Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became follower at term 0"} {"level":"info","ts":"2023-01-11T15:17:05.535Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 8e9e05c52164694d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"} {"level":"info","ts":"2023-01-11T15:17:05.535Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became follower at term 1"} {"level":"info","ts":"2023-01-11T15:17:05.535Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d switched to configuration voters=(10276657743932975437)"} {"level":"warn","ts":"2023-01-11T15:17:05.539Z","caller":"auth/store.go:1234","msg":"simple token is not cryptographically signed"} {"level":"info","ts":"2023-01-11T15:17:05.543Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1} {"level":"info","ts":"2023-01-11T15:17:05.545Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"} {"level":"info","ts":"2023-01-11T15:17:05.549Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"8e9e05c52164694d","local-server-version":"3.5.6","cluster-version":"to_be_decided"} {"level":"info","ts":"2023-01-11T15:17:05.549Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"default.etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"} {"level":"info","ts":"2023-01-11T15:17:05.549Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"default.etcd/member/snap","suffix":"snap","max":5,"interval":"30s"} 
{"level":"info","ts":"2023-01-11T15:17:05.549Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"default.etcd/member/wal","suffix":"wal","max":5,"interval":"30s"} {"level":"info","ts":"2023-01-11T15:17:05.549Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"8e9e05c52164694d","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"} {"level":"info","ts":"2023-01-11T15:17:05.551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d switched to configuration voters=(10276657743932975437)"} {"level":"info","ts":"2023-01-11T15:17:05.551Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cdf818194e3a8c32","local-member-id":"8e9e05c52164694d","added-peer-id":"8e9e05c52164694d","added-peer-peer-urls":["http://localhost:2380"]} {"level":"info","ts":"2023-01-11T15:17:05.551Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"8e9e05c52164694d","initial-advertise-peer-urls":["http://localhost:2380"],"listen-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://127.0.0.1:2379"],"listen-client-urls":["http://127.0.0.1:2379"],"listen-metrics-urls":[]} {"level":"info","ts":"2023-01-11T15:17:05.552Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"127.0.0.1:2380"} {"level":"info","ts":"2023-01-11T15:17:05.552Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"127.0.0.1:2380"} {"level":"info","ts":"2023-01-11T15:17:05.735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d is starting a new election at term 1"} {"level":"info","ts":"2023-01-11T15:17:05.736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became pre-candidate at term 1"} {"level":"info","ts":"2023-01-11T15:17:05.736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d received MsgPreVoteResp from 8e9e05c52164694d at term 1"} {"level":"info","ts":"2023-01-11T15:17:05.736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became candidate at term 2"} {"level":"info","ts":"2023-01-11T15:17:05.736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 2"} {"level":"info","ts":"2023-01-11T15:17:05.736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became leader at term 2"} {"level":"info","ts":"2023-01-11T15:17:05.736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2"} {"level":"info","ts":"2023-01-11T15:17:05.738Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"} {"level":"info","ts":"2023-01-11T15:17:05.741Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"8e9e05c52164694d","local-member-attributes":"{Name:default ClientURLs:[http://127.0.0.1:2379]}","request-path":"/0/members/8e9e05c52164694d/attributes","cluster-id":"cdf818194e3a8c32","publish-timeout":"7s"} {"level":"info","ts":"2023-01-11T15:17:05.741Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"} {"level":"info","ts":"2023-01-11T15:17:05.742Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"} 
{"level":"info","ts":"2023-01-11T15:17:05.742Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"} {"level":"info","ts":"2023-01-11T15:17:05.742Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cdf818194e3a8c32","local-member-id":"8e9e05c52164694d","cluster-version":"3.5"} {"level":"info","ts":"2023-01-11T15:17:05.742Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} {"level":"info","ts":"2023-01-11T15:17:05.742Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"} {"level":"info","ts":"2023-01-11T15:17:05.743Z","caller":"embed/serve.go:146","msg":"serving client traffic insecurely; this is strongly discouraged!","address":"127.0.0.1:2379"} Jan 11 15:18:07.500: FAIL: gave up waiting for apiservice wardle to come up successfully Unexpected error: <*errors.errorString | 0xc0002462c0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/apimachinery.TestSampleAPIServer(0xc000f091e0, 0xc00410d428, {0xc002233d00, 0x3}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:384 +0x2f9a k8s.io/kubernetes/test/e2e/apimachinery.glob..func1.3() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:101 +0x128 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000da6340, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:18:07.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "aggregator-3294" for this suite. 
• Failure [65.059 seconds] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 11 15:18:07.500: gave up waiting for apiservice wardle to come up successfully Unexpected error: <*errors.errorString | 0xc0002462c0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:384 ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:18:07.927: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Starting the proxy Jan 11 15:18:07.983: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1138 proxy --unix-socket=/tmp/kubectl-proxy-unix4194741749/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:18:08.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1138" for this suite.
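The kubectl spec above proxies the API over a unix socket rather than TCP. A minimal sketch of retrieving the /api/ document over such a socket, assuming only Go's standard library and the socket path shown in the log (which exists only inside the test environment):

```go
package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"net"
	"net/http"
)

func main() {
	// Socket path as created by the proxy in the log above.
	socket := "/tmp/kubectl-proxy-unix4194741749/test"

	client := &http.Client{
		Transport: &http.Transport{
			// Route every request over the unix socket instead of TCP.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", socket)
			},
		},
	}

	// The host part of the URL is ignored by the dialer above; the test
	// retrieves the proxy's /api/ output in the same way.
	resp, err := client.Get("http://localhost/api/")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```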
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":45,"skipped":1027,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]} ------------------------------ {"msg":"FAILED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":49,"skipped":1192,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:18:07.896: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Registering the sample API server. Jan 11 15:18:08.826: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jan 11 15:18:10.892: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 11, 15, 18, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 11, 15, 18, 8, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 11, 15, 18, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 11, 15, 18, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7cdc9f5bf7\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 15:18:13.064: INFO: Waited 161.28168ms for the sample-apiserver to be ready to handle requests.
�[1mSTEP�[0m: Read Status for v1alpha1.wardle.example.com �[1mSTEP�[0m: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' �[1mSTEP�[0m: List APIServices Jan 11 15:18:13.146: INFO: Found v1alpha1.wardle.example.com in APIServiceList [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:18:13.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "aggregator-8025" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":50,"skipped":1192,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:18:13.674: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename custom-resource-definition �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 11 15:18:13.702: INFO: >>> kubeConfig: /tmp/kubeconfig [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:18:14.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "custom-resource-definition-7234" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":51,"skipped":1225,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:18:14.755: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename watch �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating a new configmap �[1mSTEP�[0m: modifying the configmap once �[1mSTEP�[0m: modifying the configmap a second time �[1mSTEP�[0m: deleting the configmap �[1mSTEP�[0m: creating a watch on configmaps from the resource version returned by the first update �[1mSTEP�[0m: Expecting to observe notifications for all changes to the configmap after the first update Jan 11 15:18:14.808: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6193 98dbd3f4-22c1-44c2-b2a3-f663dfaf2f1a 10438 0 2023-01-11 15:18:14 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-01-11 15:18:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 11 15:18:14.809: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6193 98dbd3f4-22c1-44c2-b2a3-f663dfaf2f1a 10439 0 2023-01-11 15:18:14 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-01-11 15:18:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:18:14.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "watch-6193" for this suite. 
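The Watchers spec above opens a watch at the resourceVersion returned by the first configmap update and expects the later MODIFIED (mutation: 2) and DELETED events to be replayed even though they happened before the watch was opened. A rough client-go sketch of that call, assuming the /tmp/kubeconfig from the log and a placeholder resourceVersion (the test records the real value at runtime):

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Placeholder: the resourceVersion returned by the first configmap update.
	rv := "10437"

	w, err := cs.CoreV1().ConfigMaps("watch-6193").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector:   "watch-this-configmap=from-resource-version",
		ResourceVersion: rv,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer w.Stop()

	// Events replay from rv onward, which is why the MODIFIED and DELETED
	// notifications in the log arrive after the watch is created.
	for ev := range w.ResultChan() {
		fmt.Println(ev.Type)
	}
}
```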
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":52,"skipped":1235,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:18:14.829: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test downward api env vars Jan 11 15:18:14.859: INFO: Waiting up to 5m0s for pod "downward-api-b329b8e7-d797-4c40-8e64-ad774540a04e" in namespace "downward-api-4501" to be "Succeeded or Failed" Jan 11 15:18:14.862: INFO: Pod "downward-api-b329b8e7-d797-4c40-8e64-ad774540a04e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.268668ms Jan 11 15:18:16.868: INFO: Pod "downward-api-b329b8e7-d797-4c40-8e64-ad774540a04e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00928432s Jan 11 15:18:18.873: INFO: Pod "downward-api-b329b8e7-d797-4c40-8e64-ad774540a04e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014346231s �[1mSTEP�[0m: Saw pod success Jan 11 15:18:18.873: INFO: Pod "downward-api-b329b8e7-d797-4c40-8e64-ad774540a04e" satisfied condition "Succeeded or Failed" Jan 11 15:18:18.876: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-worker-r73y4c pod downward-api-b329b8e7-d797-4c40-8e64-ad774540a04e container dapi-container: <nil> �[1mSTEP�[0m: delete the pod Jan 11 15:18:18.903: INFO: Waiting for pod downward-api-b329b8e7-d797-4c40-8e64-ad774540a04e to disappear Jan 11 15:18:18.905: INFO: Pod downward-api-b329b8e7-d797-4c40-8e64-ad774540a04e no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:18:18.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-4501" for this suite. 
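The downward API spec above injects the container's own limits and requests as environment variables. A sketch of the corresponding EnvVar wiring via ResourceFieldRef, assuming k8s.io/api; the variable names here are illustrative, not the ones the e2e test uses:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Environment variables populated from the container's own resource
	// limits/requests via the downward API.
	env := []corev1.EnvVar{
		{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
			ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"}}},
		{Name: "MEMORY_LIMIT", ValueFrom: &corev1.EnvVarSource{
			ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"}}},
		{Name: "CPU_REQUEST", ValueFrom: &corev1.EnvVarSource{
			ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.cpu"}}},
		{Name: "MEMORY_REQUEST", ValueFrom: &corev1.EnvVarSource{
			ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"}}},
	}
	for _, e := range env {
		fmt.Println(e.Name, "->", e.ValueFrom.ResourceFieldRef.Resource)
	}
}
```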
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":53,"skipped":1239,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:18:18.954: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-lifecycle-hook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 �[1mSTEP�[0m: create the container to handle the HTTPGet hook request. Jan 11 15:18:18.987: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:18:20.992: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: create the pod with lifecycle hook Jan 11 15:18:21.007: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:18:23.012: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true) �[1mSTEP�[0m: check poststart hook �[1mSTEP�[0m: delete the pod with lifecycle hook Jan 11 15:18:23.030: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 11 15:18:23.035: INFO: Pod pod-with-poststart-http-hook still exists Jan 11 15:18:25.036: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 11 15:18:25.041: INFO: Pod pod-with-poststart-http-hook still exists Jan 11 15:18:27.037: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 11 15:18:27.041: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:18:27.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-lifecycle-hook-5436" for this suite. 
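The lifecycle-hook spec above creates a handler pod and then a second pod whose postStart HTTP hook calls back into it. A sketch of the shape of that container spec, assuming k8s.io/api (where the field is named LifecycleHandler as of v1.23); the image, handler address, and port are illustrative only:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// A container with a postStart HTTP hook pointed at the handler pod.
	c := corev1.Container{
		Name:  "pod-with-poststart-http-hook",
		Image: "busybox", // illustrative
		Lifecycle: &corev1.Lifecycle{
			PostStart: &corev1.LifecycleHandler{
				HTTPGet: &corev1.HTTPGetAction{
					Host: "10.244.1.5", // handler pod IP (illustrative)
					Path: "/echo?msg=poststart",
					Port: intstr.FromInt(8080),
				},
			},
		},
	}
	fmt.Println(c.Lifecycle.PostStart.HTTPGet.Path)
}
```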
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":54,"skipped":1264,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:18:27.117: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test emptydir 0777 on tmpfs Jan 11 15:18:27.153: INFO: Waiting up to 5m0s for pod "pod-2723b8f3-bc03-40db-b1bb-4f9e6d81dcd9" in namespace "emptydir-3567" to be "Succeeded or Failed" Jan 11 15:18:27.156: INFO: Pod "pod-2723b8f3-bc03-40db-b1bb-4f9e6d81dcd9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.1331ms Jan 11 15:18:29.160: INFO: Pod "pod-2723b8f3-bc03-40db-b1bb-4f9e6d81dcd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007222809s Jan 11 15:18:31.165: INFO: Pod "pod-2723b8f3-bc03-40db-b1bb-4f9e6d81dcd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011629325s �[1mSTEP�[0m: Saw pod success Jan 11 15:18:31.165: INFO: Pod "pod-2723b8f3-bc03-40db-b1bb-4f9e6d81dcd9" satisfied condition "Succeeded or Failed" Jan 11 15:18:31.168: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-md-0-chlxb-c9b4c6cfd-57pm4 pod pod-2723b8f3-bc03-40db-b1bb-4f9e6d81dcd9 container test-container: <nil> �[1mSTEP�[0m: delete the pod Jan 11 15:18:31.185: INFO: Waiting for pod pod-2723b8f3-bc03-40db-b1bb-4f9e6d81dcd9 to disappear Jan 11 15:18:31.188: INFO: Pod pod-2723b8f3-bc03-40db-b1bb-4f9e6d81dcd9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:18:31.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-3567" for this suite. 
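The emptyDir spec above writes to a tmpfs-backed volume and checks the 0777 mode. A sketch of the memory-medium emptyDir wiring it exercises, assuming k8s.io/api; the image and command are placeholders rather than the e2e test's own mounttest invocation:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pod equivalent in spirit to the one the test creates: a memory-backed
	// (tmpfs) emptyDir mounted into a container that inspects the mount.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Volumes[0])
}
```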
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":55,"skipped":1307,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:18:08.168: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename subpath �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 �[1mSTEP�[0m: Setting up data [It] should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating pod pod-subpath-test-projected-lsn6 �[1mSTEP�[0m: Creating a pod to test atomic-volume-subpath Jan 11 15:18:08.216: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-lsn6" in namespace "subpath-2065" to be "Succeeded or Failed" Jan 11 15:18:08.219: INFO: Pod "pod-subpath-test-projected-lsn6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.540251ms Jan 11 15:18:10.226: INFO: Pod "pod-subpath-test-projected-lsn6": Phase="Running", Reason="", readiness=true. Elapsed: 2.010604607s Jan 11 15:18:12.232: INFO: Pod "pod-subpath-test-projected-lsn6": Phase="Running", Reason="", readiness=true. Elapsed: 4.015921304s Jan 11 15:18:14.240: INFO: Pod "pod-subpath-test-projected-lsn6": Phase="Running", Reason="", readiness=true. Elapsed: 6.023829761s Jan 11 15:18:16.245: INFO: Pod "pod-subpath-test-projected-lsn6": Phase="Running", Reason="", readiness=true. Elapsed: 8.028786158s Jan 11 15:18:18.249: INFO: Pod "pod-subpath-test-projected-lsn6": Phase="Running", Reason="", readiness=true. Elapsed: 10.032795082s Jan 11 15:18:20.253: INFO: Pod "pod-subpath-test-projected-lsn6": Phase="Running", Reason="", readiness=true. Elapsed: 12.037594933s Jan 11 15:18:22.259: INFO: Pod "pod-subpath-test-projected-lsn6": Phase="Running", Reason="", readiness=true. Elapsed: 14.043033642s Jan 11 15:18:24.263: INFO: Pod "pod-subpath-test-projected-lsn6": Phase="Running", Reason="", readiness=true. Elapsed: 16.047656096s Jan 11 15:18:26.268: INFO: Pod "pod-subpath-test-projected-lsn6": Phase="Running", Reason="", readiness=true. Elapsed: 18.052372332s Jan 11 15:18:28.273: INFO: Pod "pod-subpath-test-projected-lsn6": Phase="Running", Reason="", readiness=true. Elapsed: 20.057649979s Jan 11 15:18:30.279: INFO: Pod "pod-subpath-test-projected-lsn6": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.063208174s Jan 11 15:18:32.283: INFO: Pod "pod-subpath-test-projected-lsn6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.067170014s STEP: Saw pod success Jan 11 15:18:32.283: INFO: Pod "pod-subpath-test-projected-lsn6" satisfied condition "Succeeded or Failed" Jan 11 15:18:32.286: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-worker-b15lfw pod pod-subpath-test-projected-lsn6 container test-container-subpath-projected-lsn6: <nil> STEP: delete the pod Jan 11 15:18:32.307: INFO: Waiting for pod pod-subpath-test-projected-lsn6 to disappear Jan 11 15:18:32.315: INFO: Pod pod-subpath-test-projected-lsn6 no longer exists STEP: Deleting pod pod-subpath-test-projected-lsn6 Jan 11 15:18:32.315: INFO: Deleting pod "pod-subpath-test-projected-lsn6" in namespace "subpath-2065" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:18:32.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2065" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":46,"skipped":1051,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]} ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:18:32.456: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 11 15:18:36.530: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:18:36.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4176" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":-1,"completed":47,"skipped":1113,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]} ------------------------------ {"msg":"FAILED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":89,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]} [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:14:00.927: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 STEP: create the container to handle the HTTPGet hook request.
Jan 11 15:14:00.971: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:14:02.976: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: create the pod with lifecycle hook Jan 11 15:14:02.991: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:14:04.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:14:06.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:14:08.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:14:10.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:14:12.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:14:14.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:14:16.997: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:14:18.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:14:20.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:14:22.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:14:24.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:14:26.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:14:28.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:14:30.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:14:32.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:14:34.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:14:36.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:14:38.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:14:40.994: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:14:42.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:14:44.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:14:46.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running 
(with Ready = true) Jan 11 15:14:48.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:14:50.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:14:52.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:14:54.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:14:56.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:14:58.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:00.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:02.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:04.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:06.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:08.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:10.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:12.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:14.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:16.998: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:18.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:20.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:22.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:24.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:26.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:28.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:30.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:32.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:34.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:36.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:38.994: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:40.995: 
INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:42.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:44.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:46.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:48.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:50.997: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:52.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:54.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:56.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:15:58.997: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:00.997: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:02.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:04.997: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:06.997: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:08.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:10.997: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:12.997: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:14.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:16.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:18.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:20.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:22.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:24.997: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:26.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:28.998: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:30.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:32.995: INFO: The status of Pod 
pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:34.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:36.997: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:38.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:40.994: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:42.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:44.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:46.998: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:48.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:50.997: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:52.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:54.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:56.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:16:58.997: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:17:00.997: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:17:02.997: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:17:04.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:17:06.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:17:08.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:17:10.997: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:17:12.997: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:17:14.994: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:17:16.999: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:17:18.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:17:20.997: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:17:22.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:17:25.012: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, 
waiting for it to be Running (with Ready = true) Jan 11 15:17:26.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:17:28.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:17:30.994: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:17:32.997: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:17:34.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:17:36.996: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:17:38.997: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:17:40.996: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:17:42.995: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:17:44.997: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:17:46.998: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:17:48.995: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:17:50.997: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:17:52.996: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:17:54.996: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:17:56.996: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:17:58.998: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:00.997: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:02.996: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:04.998: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:06.997: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:08.998: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:10.996: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:13.044: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:14.996: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:16.998: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:18.997: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:20.996: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:22.998: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:24.998: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:26.998: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:28.998: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:30.998: INFO: The status of Pod 
pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:33.000: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:34.995: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:36.997: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:38.996: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:40.998: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:42.996: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:44.997: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:46.997: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:48.996: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:50.997: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:52.996: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:54.996: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:56.996: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:18:58.997: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:19:00.995: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:19:02.997: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:19:03.004: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 11 15:19:03.004: FAIL: Unexpected error: <*errors.errorString | 0xc0002bc2b0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*PodClient).CreateSync(0xc003e19788, 0x7fdc8b308a68) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:107 +0x94 k8s.io/kubernetes/test/e2e/common/node.glob..func12.1.2(0xc0014e0800) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:72 +0x73 k8s.io/kubernetes/test/e2e/common/node.glob..func12.1.3() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:105 +0x32b k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000186d00, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:19:03.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-lifecycle-hook-8196" for this suite. 
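The spec above failed inside the framework's CreateSync wait: pod-with-poststart-exec-hook sat in Pending for roughly 3.5 minutes and then reached Running but never became Ready before the timeout. A rough stand-alone equivalent of that wait, assuming client-go, the /tmp/kubeconfig path from the log, and the namespace/pod names shown above (interval and timeout are illustrative):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ns, name := "container-lifecycle-hook-8196", "pod-with-poststart-exec-hook"

	// Poll until the pod is Running with Ready=True, or give up after 5 minutes,
	// which is roughly the condition the test timed out on.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pod.Status.Phase != corev1.PodRunning {
			return false, nil
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	if err != nil {
		log.Fatalf("pod never became Ready: %v", err)
	}
	fmt.Println("pod is Running and Ready")
}
```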
• Failure [302.092 seconds]
[sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44
should execute poststart exec hook properly [NodeConformance] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 11 15:19:03.004: Unexpected error: <*errors.errorString | 0xc0002bc2b0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:107
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client Jan 11 15:18:36.571: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53
STEP: create the container to handle the HTTPGet hook request. Jan 11 15:18:36.608: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:18:38.614: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the pod with lifecycle hook Jan 11 15:18:38.627: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:18:40.633: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true)
STEP: delete the pod with lifecycle hook Jan 11 15:18:40.642: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 15:18:40.645: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 15:18:42.646: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 15:18:42.651: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 15:18:44.646: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 15:18:44.650: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 15:18:46.646: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 15:18:46.650: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 15:18:48.646: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 15:18:48.651: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 15:18:50.646: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 15:18:50.651: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 15:18:52.645: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 15:18:52.650: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 15:18:54.646: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 15:18:54.651: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 15:18:56.645: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 15:18:56.651: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 15:18:58.646: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 15:18:58.651: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
Jan 11 15:19:28.652: FAIL: Timed out after 30.001s. Expected <*errors.errorString | 0xc0042536d0>: { s: "failed to match regexp \"GET /echo\\\\?msg=prestop\" in output \"I0111 15:18:37.318433 1 log.go:195] Started HTTP server on port 8080\\nI0111 15:18:37.319620 1 log.go:195] Started UDP server on port 8081\\n\"", } to be nil
Full Stack Trace
k8s.io/kubernetes/test/e2e/common/node.glob..func12.1.2(0xc000c39400) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:87 +0x34c
k8s.io/kubernetes/test/e2e/common/node.glob..func12.1.4() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:121 +0x32b
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000604680, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:19:28.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6979" for this suite.
• Failure [52.093 seconds]
[sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44
should execute prestop exec hook properly [NodeConformance] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 11 15:19:28.652: Timed out after 30.001s.
Expected <*errors.errorString | 0xc0042536d0>: { s: "failed to match regexp \"GET /echo\\\\?msg=prestop\" in output \"I0111 15:18:37.318433 1 log.go:195] Started HTTP server on port 8080\\nI0111 15:18:37.319620 1 log.go:195] Started UDP server on port 8081\\n\"", } to be nil�[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:87 �[90m------------------------------�[0m {"msg":"FAILED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":9,"skipped":164,"failed":5,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-network] DNS should provide DNS for the cluster [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:17:25.008: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating service multi-endpoint-test in namespace services-447 �[1mSTEP�[0m: waiting up to 3m0s for service multi-endpoint-test in namespace services-447 to expose endpoints map[] Jan 11 15:17:25.111: INFO: successfully validated that service multi-endpoint-test in namespace services-447 exposes endpoints map[] �[1mSTEP�[0m: Creating pod pod1 in namespace services-447 Jan 11 15:17:25.135: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:17:27.139: INFO: The status of Pod pod1 is Running (Ready = true) �[1mSTEP�[0m: waiting up to 3m0s for service multi-endpoint-test in namespace services-447 to expose endpoints map[pod1:[100]] Jan 11 15:17:27.157: INFO: successfully validated that service multi-endpoint-test in namespace services-447 exposes endpoints map[pod1:[100]] �[1mSTEP�[0m: Creating pod pod2 in namespace services-447 Jan 11 15:17:27.166: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:17:29.172: INFO: The status of Pod pod2 is Running (Ready = true) �[1mSTEP�[0m: waiting up to 3m0s for service multi-endpoint-test in namespace services-447 to expose endpoints map[pod1:[100] pod2:[101]] Jan 11 15:17:29.195: INFO: successfully validated that service multi-endpoint-test in namespace services-447 exposes endpoints map[pod1:[100] pod2:[101]] �[1mSTEP�[0m: Checking if the Service forwards traffic to pods Jan 11 15:17:29.195: INFO: Creating new exec pod Jan 11 15:17:32.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- 
/bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 11 15:17:32.392: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 11 15:17:32.392: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jan 11 15:17:32.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.130.69.242 80' Jan 11 15:17:32.612: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.130.69.242 80\nConnection to 10.130.69.242 80 port [tcp/http] succeeded!\n" Jan 11 15:17:32.612: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jan 11 15:17:32.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:17:34.832: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:17:34.832: INFO: stdout: "" Jan 11 15:17:35.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:17:38.013: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:17:38.013: INFO: stdout: "" Jan 11 15:17:38.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:17:40.999: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:17:40.999: INFO: stdout: "" Jan 11 15:17:41.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:17:44.020: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:17:44.021: INFO: stdout: "" Jan 11 15:17:44.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:17:47.052: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:17:47.052: INFO: stdout: "" Jan 11 15:17:47.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:17:50.048: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:17:50.048: INFO: stdout: "" Jan 11 15:17:50.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:17:53.025: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 
81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:17:53.025: INFO: stdout: "" Jan 11 15:17:53.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:17:56.007: INFO: stderr: "+ + ncecho -v -t hostName -w\n 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:17:56.007: INFO: stdout: "" Jan 11 15:17:56.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:17:59.032: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:17:59.032: INFO: stdout: "" Jan 11 15:17:59.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:18:02.006: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:18:02.006: INFO: stdout: "" Jan 11 15:18:02.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:18:05.023: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:18:05.023: INFO: stdout: "" Jan 11 15:18:05.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:18:08.022: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:18:08.022: INFO: stdout: "" Jan 11 15:18:08.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:18:11.045: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:18:11.045: INFO: stdout: "" Jan 11 15:18:11.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:18:14.000: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:18:14.000: INFO: stdout: "" Jan 11 15:18:14.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:18:17.012: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:18:17.012: INFO: stdout: "" Jan 11 15:18:17.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:18:20.051: INFO: stderr: "+ nc -v -t -w 2 
multi-endpoint-test 81\n+ echo hostName\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:18:20.051: INFO: stdout: "" Jan 11 15:18:20.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:18:23.016: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:18:23.016: INFO: stdout: "" Jan 11 15:18:23.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:18:25.988: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:18:25.988: INFO: stdout: "" Jan 11 15:18:26.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:18:28.999: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:18:28.999: INFO: stdout: "" Jan 11 15:18:29.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:18:31.997: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:18:31.997: INFO: stdout: "" Jan 11 15:18:32.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:18:34.989: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:18:34.989: INFO: stdout: "" Jan 11 15:18:35.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:18:38.001: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:18:38.001: INFO: stdout: "" Jan 11 15:18:38.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:18:41.033: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:18:41.033: INFO: stdout: "" Jan 11 15:18:41.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:18:43.985: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:18:43.985: INFO: stdout: "" Jan 11 15:18:44.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:18:46.998: INFO: 
stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:18:46.998: INFO: stdout: "" Jan 11 15:18:47.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:18:49.994: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:18:49.995: INFO: stdout: "" Jan 11 15:18:50.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:18:52.997: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:18:52.997: INFO: stdout: "" Jan 11 15:18:53.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:18:55.989: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:18:55.989: INFO: stdout: "" Jan 11 15:18:56.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:18:58.999: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:18:58.999: INFO: stdout: "" Jan 11 15:18:59.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:19:01.997: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:19:01.997: INFO: stdout: "" Jan 11 15:19:02.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:19:05.003: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:19:05.003: INFO: stdout: "" Jan 11 15:19:05.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:19:07.999: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:19:07.999: INFO: stdout: "" Jan 11 15:19:08.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:19:11.029: INFO: stderr: "+ + echonc -v hostName -t\n -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:19:11.029: INFO: stdout: "" Jan 11 15:19:11.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 
15:19:14.050: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:19:14.050: INFO: stdout: "" Jan 11 15:19:14.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:19:16.989: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:19:16.989: INFO: stdout: "" Jan 11 15:19:17.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:19:20.033: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:19:20.033: INFO: stdout: "" Jan 11 15:19:20.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:19:23.012: INFO: stderr: "+ + ncecho -v hostName -t\n -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:19:23.012: INFO: stdout: "" Jan 11 15:19:23.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:19:25.995: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:19:25.995: INFO: stdout: "" Jan 11 15:19:26.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:19:29.013: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:19:29.013: INFO: stdout: "" Jan 11 15:19:29.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:19:32.007: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:19:32.007: INFO: stdout: "" Jan 11 15:19:32.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:19:35.009: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:19:35.009: INFO: stdout: "" Jan 11 15:19:35.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-447 exec execpodm8hqj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Jan 11 15:19:37.183: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Jan 11 15:19:37.183: INFO: stdout: "" Jan 11 15:19:37.184: FAIL: Unexpected error: <*errors.errorString | 0xc000ed6310>: { s: "service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:81 over TCP protocol", } 
service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:81 over TCP protocol occurred
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.5() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:916 +0x7c6
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x0) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0005f8340, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:19:37.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-447" for this suite.
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
• Failure [132.316 seconds]
[sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should serve multiport endpoints from pods [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 11 15:19:37.184: Unexpected error: <*errors.errorString | 0xc000ed6310>: { s: "service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:81 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:81 over TCP protocol occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:916
------------------------------
{"msg":"FAILED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":9,"skipped":164,"failed":6,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-network] DNS should provide DNS for the cluster [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client Jan 11 15:19:37.374: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery]
AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 11 15:19:37.786: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 11 15:19:40.808: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Listing all of the created validation webhooks �[1mSTEP�[0m: Creating a configMap that should be mutated �[1mSTEP�[0m: Deleting the collection of validation webhooks �[1mSTEP�[0m: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:19:40.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-8313" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-8313-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":10,"skipped":186,"failed":6,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-network] DNS should provide DNS for the cluster [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:19:41.154: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename secrets �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should 
be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating secret with name secret-test-70cbffb2-533f-4209-8871-0873fc5df424 �[1mSTEP�[0m: Creating a pod to test consume secrets Jan 11 15:19:41.187: INFO: Waiting up to 5m0s for pod "pod-secrets-b472277c-c85d-499c-a0b2-dc5764a33f85" in namespace "secrets-5822" to be "Succeeded or Failed" Jan 11 15:19:41.191: INFO: Pod "pod-secrets-b472277c-c85d-499c-a0b2-dc5764a33f85": Phase="Pending", Reason="", readiness=false. Elapsed: 3.957305ms Jan 11 15:19:43.197: INFO: Pod "pod-secrets-b472277c-c85d-499c-a0b2-dc5764a33f85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009417183s Jan 11 15:19:45.201: INFO: Pod "pod-secrets-b472277c-c85d-499c-a0b2-dc5764a33f85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013898229s �[1mSTEP�[0m: Saw pod success Jan 11 15:19:45.201: INFO: Pod "pod-secrets-b472277c-c85d-499c-a0b2-dc5764a33f85" satisfied condition "Succeeded or Failed" Jan 11 15:19:45.204: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-worker-r73y4c pod pod-secrets-b472277c-c85d-499c-a0b2-dc5764a33f85 container secret-volume-test: <nil> �[1mSTEP�[0m: delete the pod Jan 11 15:19:45.222: INFO: Waiting for pod pod-secrets-b472277c-c85d-499c-a0b2-dc5764a33f85 to disappear Jan 11 15:19:45.225: INFO: Pod pod-secrets-b472277c-c85d-499c-a0b2-dc5764a33f85 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:19:45.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "secrets-5822" for this suite. 
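The Secrets spec above, and the passing storage specs later in the log, all rely on the same wait: create a short-lived test pod, then poll its phase until it is "Succeeded or Failed" or five minutes elapse. A minimal client-go sketch of that polling loop, assuming an existing clientset and using a hypothetical helper name:

package e2esketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodCompletion mimics the "Succeeded or Failed" wait seen above:
// poll every 2s, give up after 5m, stop early once the pod has terminated.
func waitForPodCompletion(ctx context.Context, c kubernetes.Interface, namespace, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %s/%s failed: %s", namespace, name, pod.Status.Reason)
		default:
			return false, nil // Pending or Running: keep polling
		}
	})
}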
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":234,"failed":6,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-network] DNS should provide DNS for the cluster [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:19:45.238: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 11 15:19:46.050: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 11 15:19:49.071: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a mutating webhook configuration �[1mSTEP�[0m: Updating a mutating webhook configuration's rules to not include the create operation �[1mSTEP�[0m: Creating a configMap that should not be mutated �[1mSTEP�[0m: Patching a mutating webhook configuration's rules to include the create operation �[1mSTEP�[0m: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:19:49.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-7037" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-7037-markers" for this suite. 
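The webhook spec above toggles the CREATE operation out of, and then back into, a mutating webhook configuration's rules. A hedged sketch of one way to perform the "include the create operation" step with a JSON patch follows; the configuration name, rule index and patch path are assumptions for illustration, not values taken from the test:

package e2esketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// restoreCreateOperation sets the first rule of the named
// MutatingWebhookConfiguration back to matching CREATE via a JSON patch.
func restoreCreateOperation(ctx context.Context, c kubernetes.Interface, name string) error {
	patch := []byte(`[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE"]}]`)
	_, err := c.AdmissionregistrationV1().MutatingWebhookConfigurations().
		Patch(ctx, name, types.JSONPatchType, patch, metav1.PatchOptions{})
	return err
}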
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":12,"skipped":236,"failed":6,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-network] DNS should provide DNS for the cluster [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:19:49.287: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating configMap with name projected-configmap-test-volume-504182ee-2942-422c-af0f-9ec658dc8f8c �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 11 15:19:49.322: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8ce44df7-c605-4be3-9ec3-28ebf55cb770" in namespace "projected-9116" to be "Succeeded or Failed" Jan 11 15:19:49.325: INFO: Pod "pod-projected-configmaps-8ce44df7-c605-4be3-9ec3-28ebf55cb770": Phase="Pending", Reason="", readiness=false. Elapsed: 3.417836ms Jan 11 15:19:51.330: INFO: Pod "pod-projected-configmaps-8ce44df7-c605-4be3-9ec3-28ebf55cb770": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007870745s Jan 11 15:19:53.334: INFO: Pod "pod-projected-configmaps-8ce44df7-c605-4be3-9ec3-28ebf55cb770": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012591077s �[1mSTEP�[0m: Saw pod success Jan 11 15:19:53.335: INFO: Pod "pod-projected-configmaps-8ce44df7-c605-4be3-9ec3-28ebf55cb770" satisfied condition "Succeeded or Failed" Jan 11 15:19:53.338: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-worker-r73y4c pod pod-projected-configmaps-8ce44df7-c605-4be3-9ec3-28ebf55cb770 container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 11 15:19:53.354: INFO: Waiting for pod pod-projected-configmaps-8ce44df7-c605-4be3-9ec3-28ebf55cb770 to disappear Jan 11 15:19:53.357: INFO: Pod pod-projected-configmaps-8ce44df7-c605-4be3-9ec3-28ebf55cb770 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:19:53.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-9116" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":275,"failed":6,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-network] DNS should provide DNS for the cluster [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:19:53.389: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 11 15:19:53.959: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 11 15:19:56.982: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: fetching the /apis discovery document �[1mSTEP�[0m: finding the admissionregistration.k8s.io API group 
in the /apis discovery document �[1mSTEP�[0m: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document �[1mSTEP�[0m: fetching the /apis/admissionregistration.k8s.io discovery document �[1mSTEP�[0m: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document �[1mSTEP�[0m: fetching the /apis/admissionregistration.k8s.io/v1 discovery document �[1mSTEP�[0m: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:19:56.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-5134" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-5134-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":14,"skipped":291,"failed":6,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-network] DNS should provide DNS for the cluster [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m {"msg":"FAILED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":47,"skipped":1118,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]} [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:19:28.666: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-lifecycle-hook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 �[1mSTEP�[0m: create the 
container to handle the HTTPGet hook request. Jan 11 15:19:28.705: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:19:30.711: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: create the pod with lifecycle hook Jan 11 15:19:30.724: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:19:32.728: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) �[1mSTEP�[0m: delete the pod with lifecycle hook Jan 11 15:19:32.737: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 15:19:32.740: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 15:19:34.741: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 15:19:34.744: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 15:19:36.741: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 15:19:36.745: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 15:19:38.741: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 15:19:38.744: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 15:19:40.741: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 15:19:40.746: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 15:19:42.741: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 15:19:42.751: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 15:19:44.741: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 15:19:44.745: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 15:19:46.741: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 15:19:46.745: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 15:19:48.740: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 15:19:48.745: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 15:19:50.742: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 15:19:50.745: INFO: Pod pod-with-prestop-exec-hook no longer exists �[1mSTEP�[0m: check prestop hook Jan 11 15:20:20.747: FAIL: Timed out after 30.001s. 
Expected <*errors.errorString | 0xc003c7e090>: { s: "failed to match regexp \"GET /echo\\\\?msg=prestop\" in output \"I0111 15:19:29.414022 1 log.go:195] Started HTTP server on port 8080\\nI0111 15:19:29.415154 1 log.go:195] Started UDP server on port 8081\\n\"", } to be nil
Full Stack Trace
k8s.io/kubernetes/test/e2e/common/node.glob..func12.1.2(0xc001069000) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:87 +0x34c
k8s.io/kubernetes/test/e2e/common/node.glob..func12.1.4() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:121 +0x32b
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000604680, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:20:20.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7102" for this suite.
• Failure [52.093 seconds]
[sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44
should execute prestop exec hook properly [NodeConformance] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 11 15:20:20.747: Timed out after 30.001s.
Expected <*errors.errorString | 0xc003c7e090>: { s: "failed to match regexp \"GET /echo\\\\?msg=prestop\" in output \"I0111 15:19:29.414022 1 log.go:195] Started HTTP server on port 8080\\nI0111 15:19:29.415154 1 log.go:195] Started UDP server on port 8081\\n\"", } to be nil�[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:87 �[90m------------------------------�[0m {"msg":"FAILED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":47,"skipped":1118,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]} [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:20:20.760: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-lifecycle-hook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 �[1mSTEP�[0m: create the container to handle the HTTPGet hook request. Jan 11 15:20:20.808: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:20:22.813: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: create the pod with lifecycle hook Jan 11 15:20:22.827: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:20:24.833: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) �[1mSTEP�[0m: delete the pod with lifecycle hook Jan 11 15:20:24.842: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 15:20:24.846: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 15:20:26.847: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 15:20:26.853: INFO: Pod pod-with-prestop-exec-hook still exists Jan 11 15:20:28.847: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 11 15:20:28.852: INFO: Pod pod-with-prestop-exec-hook no longer exists �[1mSTEP�[0m: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:20:28.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-lifecycle-hook-7868" for this suite. 
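Both failed prestop attempts above die at lifecycle_hook.go:87 for the same reason: within 30s the handler pod's log never contains the expected "GET /echo?msg=prestop" request, only the two server start-up lines. A small client-go sketch of that log check, reusing the handler pod name from this run (the pod is created per test namespace, so the names are only meaningful inside a single run):

package e2esketch

import (
	"context"
	"regexp"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// prestopHookDelivered reads the handler pod's log and reports whether the
// "GET /echo?msg=prestop" request ever arrived, which is the pattern the
// failed specs above time out waiting for.
func prestopHookDelivered(ctx context.Context, c kubernetes.Interface, namespace string) (bool, error) {
	raw, err := c.CoreV1().Pods(namespace).
		GetLogs("pod-handle-http-request", &corev1.PodLogOptions{}).
		Do(ctx).Raw()
	if err != nil {
		return false, err
	}
	return regexp.MustCompile(`GET /echo\?msg=prestop`).Match(raw), nil
}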
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":1118,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:20:28.901: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test emptydir 0777 on node default medium Jan 11 15:20:28.934: INFO: Waiting up to 5m0s for pod "pod-db507c73-e26c-4799-9c0e-fad1a11d2d3a" in namespace "emptydir-7307" to be "Succeeded or Failed" Jan 11 15:20:28.937: INFO: Pod "pod-db507c73-e26c-4799-9c0e-fad1a11d2d3a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.268308ms Jan 11 15:20:30.942: INFO: Pod "pod-db507c73-e26c-4799-9c0e-fad1a11d2d3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007872986s Jan 11 15:20:32.949: INFO: Pod "pod-db507c73-e26c-4799-9c0e-fad1a11d2d3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014655398s �[1mSTEP�[0m: Saw pod success Jan 11 15:20:32.949: INFO: Pod "pod-db507c73-e26c-4799-9c0e-fad1a11d2d3a" satisfied condition "Succeeded or Failed" Jan 11 15:20:32.952: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-worker-r73y4c pod pod-db507c73-e26c-4799-9c0e-fad1a11d2d3a container test-container: <nil> �[1mSTEP�[0m: delete the pod Jan 11 15:20:32.971: INFO: Waiting for pod pod-db507c73-e26c-4799-9c0e-fad1a11d2d3a to disappear Jan 11 15:20:32.974: INFO: Pod pod-db507c73-e26c-4799-9c0e-fad1a11d2d3a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:20:32.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-7307" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":1136,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:20:33.029: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating configMap with name configmap-test-volume-5b97d287-8d26-464c-ad91-fa1c6600bfe2 �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 11 15:20:33.073: INFO: Waiting up to 5m0s for pod "pod-configmaps-6e0b8de3-b70b-4c0c-a9bd-792d8cdf67eb" in namespace "configmap-7201" to be "Succeeded or Failed" Jan 11 15:20:33.079: INFO: Pod "pod-configmaps-6e0b8de3-b70b-4c0c-a9bd-792d8cdf67eb": Phase="Pending", Reason="", readiness=false. Elapsed: 5.095646ms Jan 11 15:20:35.086: INFO: Pod "pod-configmaps-6e0b8de3-b70b-4c0c-a9bd-792d8cdf67eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012145388s Jan 11 15:20:37.091: INFO: Pod "pod-configmaps-6e0b8de3-b70b-4c0c-a9bd-792d8cdf67eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01720767s �[1mSTEP�[0m: Saw pod success Jan 11 15:20:37.091: INFO: Pod "pod-configmaps-6e0b8de3-b70b-4c0c-a9bd-792d8cdf67eb" satisfied condition "Succeeded or Failed" Jan 11 15:20:37.095: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-worker-r73y4c pod pod-configmaps-6e0b8de3-b70b-4c0c-a9bd-792d8cdf67eb container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 11 15:20:37.112: INFO: Waiting for pod pod-configmaps-6e0b8de3-b70b-4c0c-a9bd-792d8cdf67eb to disappear Jan 11 15:20:37.115: INFO: Pod pod-configmaps-6e0b8de3-b70b-4c0c-a9bd-792d8cdf67eb no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:20:37.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-7201" for this suite. 
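The passing ConfigMap test above mounts a ConfigMap as a volume into a pod that runs as a non-root user and expects the pod to read the keys and exit successfully. A minimal sketch of such a pod spec using the standard core/v1 types follows; the names, image, mount path, and UID are illustrative, not the framework's own.

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// configMapVolumePod sketches a pod that consumes a ConfigMap as a volume
// while running as a non-root user, as the "consumable from pods in volume
// as non-root" conformance test does.
func configMapVolumePod(cmName string) *corev1.Pod {
	nonRoot := int64(1000)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmap-volume"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			RestartPolicy:   corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "reader",
				Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.39", // illustrative
				Command: []string{"sh", "-c", "cat /etc/config/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/config",
				}},
			}},
		},
	}
}
```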
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":1159,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:20:37.139: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename replication-controller �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating a ReplicationController �[1mSTEP�[0m: waiting for RC to be added �[1mSTEP�[0m: waiting for available Replicas �[1mSTEP�[0m: patching ReplicationController �[1mSTEP�[0m: waiting for RC to be modified �[1mSTEP�[0m: patching ReplicationController status �[1mSTEP�[0m: waiting for RC to be modified �[1mSTEP�[0m: waiting for available Replicas �[1mSTEP�[0m: fetching ReplicationController status �[1mSTEP�[0m: patching ReplicationController scale �[1mSTEP�[0m: waiting for RC to be modified �[1mSTEP�[0m: waiting for ReplicationController's scale to be the max amount �[1mSTEP�[0m: fetching ReplicationController; ensuring that it's patched �[1mSTEP�[0m: updating ReplicationController status �[1mSTEP�[0m: waiting for RC to be modified �[1mSTEP�[0m: listing all ReplicationControllers �[1mSTEP�[0m: checking that ReplicationController has expected values �[1mSTEP�[0m: deleting ReplicationControllers by collection �[1mSTEP�[0m: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:20:39.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "replication-controller-3472" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":51,"skipped":1164,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:20:39.101: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename cronjob �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support CronJob API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a cronjob �[1mSTEP�[0m: creating �[1mSTEP�[0m: getting �[1mSTEP�[0m: listing �[1mSTEP�[0m: watching Jan 11 15:20:39.140: INFO: starting watch �[1mSTEP�[0m: cluster-wide listing �[1mSTEP�[0m: cluster-wide watching Jan 11 15:20:39.143: INFO: starting watch �[1mSTEP�[0m: patching �[1mSTEP�[0m: updating Jan 11 15:20:39.158: INFO: waiting for watch events with expected annotations Jan 11 15:20:39.158: INFO: saw patched and updated annotations �[1mSTEP�[0m: patching /status �[1mSTEP�[0m: updating /status �[1mSTEP�[0m: get /status �[1mSTEP�[0m: deleting �[1mSTEP�[0m: deleting a collection [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:20:39.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "cronjob-4929" for this suite. 
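The CronJob test above walks the full batch/v1 CronJob surface: create, get, list, watch, patch, update, the /status subresource, and delete, both namespaced and cluster-wide. A sketch of the create step with client-go follows, assuming an already configured clientset; the schedule, image, and object names are illustrative.

```go
package sketch

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createCronJob sketches the create step of the batch/v1 CronJob API
// operations exercised by the conformance test.
func createCronJob(ctx context.Context, cs kubernetes.Interface, ns string) (*batchv1.CronJob, error) {
	cj := &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "example-cronjob"},
		Spec: batchv1.CronJobSpec{
			Schedule: "*/1 * * * *",
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "c",
								Image:   "busybox", // illustrative
								Command: []string{"sleep", "1"},
							}},
						},
					},
				},
			},
		},
	}
	return cs.BatchV1().CronJobs(ns).Create(ctx, cj, metav1.CreateOptions{})
}
```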
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":52,"skipped":1171,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:20:39.263: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename podtemplate �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Create set of pod templates Jan 11 15:20:39.297: INFO: created test-podtemplate-1 Jan 11 15:20:39.303: INFO: created test-podtemplate-2 Jan 11 15:20:39.308: INFO: created test-podtemplate-3 �[1mSTEP�[0m: get a list of pod templates with a label in the current namespace �[1mSTEP�[0m: delete collection of pod templates Jan 11 15:20:39.314: INFO: requesting DeleteCollection of pod templates �[1mSTEP�[0m: check that the list of pod templates matches the requested quantity Jan 11 15:20:39.328: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:20:39.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "podtemplate-5507" for this suite. 
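The PodTemplates test above creates three templates carrying a common label, lists them by that label, and then removes them with a single DeleteCollection call. A client-go sketch of that delete step follows; the selector value shown is illustrative.

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deletePodTemplates sketches the DeleteCollection step used by the
// "delete a collection of pod templates" test: every template in the
// namespace matching the label selector is removed in one API call.
func deletePodTemplates(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	return cs.CoreV1().PodTemplates(ns).DeleteCollection(
		ctx,
		metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: selector}, // e.g. "podtemplate-set=true" (illustrative)
	)
}
```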
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":53,"skipped":1201,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:19:57.106: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 11 15:19:57.451: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 11 15:20:00.476: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API Jan 11 15:20:10.499: INFO: Waiting for webhook configuration to be ready... Jan 11 15:20:20.611: INFO: Waiting for webhook configuration to be ready... Jan 11 15:20:30.716: INFO: Waiting for webhook configuration to be ready... Jan 11 15:20:40.816: INFO: Waiting for webhook configuration to be ready... Jan 11 15:20:50.828: INFO: Waiting for webhook configuration to be ready... 
Jan 11 15:20:50.828: FAIL: waiting for webhook configuration to be ready Unexpected error: <*errors.errorString | 0xc0002442b0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/apimachinery.registerValidatingWebhookForWebhookConfigurations(0xc000f9af20, {0xc0015fb4e8, 0x14}, 0xc003595310, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1339 +0x7ca k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.10() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:275 +0x73 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x0) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0005f8340, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:20:50.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7111" for this suite. STEP: Destroying namespace "webhook-7111-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • Failure [53.796 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 11 15:20:50.828: waiting for webhook configuration to be ready Unexpected error: <*errors.errorString | 0xc0002442b0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1339 ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:20:39.403: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: set up a multi version CRD Jan 11 15:20:39.431: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] 
CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:20:56.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "crd-publish-openapi-331" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":54,"skipped":1233,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:20:56.270: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: starting the proxy server Jan 11 15:20:56.294: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9471 proxy -p 0 --disable-filter' �[1mSTEP�[0m: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:20:56.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-9471" for this suite. 
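The CustomResourcePublishOpenAPI test above (namespace crd-publish-openapi-331) sets up a multi-version CRD, marks one version as no longer served, and then checks that its definition disappears from the published OpenAPI spec while the other version is untouched. A sketch of the "stop serving one version" step using the apiextensions clientset follows; the clientset is assumed to be configured, and the CRD and version names are illustrative.

```go
package sketch

import (
	"context"

	apiextclientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// unserveVersion flips one version of an existing CRD to served=false; after
// this update the published OpenAPI definition for that version should be
// removed by the apiserver.
func unserveVersion(ctx context.Context, c apiextclientset.Interface, crdName, version string) error {
	crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, crdName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for i := range crd.Spec.Versions {
		if crd.Spec.Versions[i].Name == version { // e.g. "v1beta1" (illustrative)
			crd.Spec.Versions[i].Served = false
		}
	}
	_, err = c.ApiextensionsV1().CustomResourceDefinitions().Update(ctx, crd, metav1.UpdateOptions{})
	return err
}
```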
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":55,"skipped":1236,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:20:56.428: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename subpath �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 �[1mSTEP�[0m: Setting up data [It] should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating pod pod-subpath-test-downwardapi-bpbs �[1mSTEP�[0m: Creating a pod to test atomic-volume-subpath Jan 11 15:20:56.474: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-bpbs" in namespace "subpath-9320" to be "Succeeded or Failed" Jan 11 15:20:56.478: INFO: Pod "pod-subpath-test-downwardapi-bpbs": Phase="Pending", Reason="", readiness=false. Elapsed: 3.133358ms Jan 11 15:20:58.482: INFO: Pod "pod-subpath-test-downwardapi-bpbs": Phase="Running", Reason="", readiness=true. Elapsed: 2.00743895s Jan 11 15:21:00.487: INFO: Pod "pod-subpath-test-downwardapi-bpbs": Phase="Running", Reason="", readiness=true. Elapsed: 4.012420286s Jan 11 15:21:02.492: INFO: Pod "pod-subpath-test-downwardapi-bpbs": Phase="Running", Reason="", readiness=true. Elapsed: 6.017466911s Jan 11 15:21:04.496: INFO: Pod "pod-subpath-test-downwardapi-bpbs": Phase="Running", Reason="", readiness=true. Elapsed: 8.021781788s Jan 11 15:21:06.501: INFO: Pod "pod-subpath-test-downwardapi-bpbs": Phase="Running", Reason="", readiness=true. Elapsed: 10.026746475s Jan 11 15:21:08.508: INFO: Pod "pod-subpath-test-downwardapi-bpbs": Phase="Running", Reason="", readiness=true. Elapsed: 12.033219376s Jan 11 15:21:10.514: INFO: Pod "pod-subpath-test-downwardapi-bpbs": Phase="Running", Reason="", readiness=true. Elapsed: 14.0395103s Jan 11 15:21:12.520: INFO: Pod "pod-subpath-test-downwardapi-bpbs": Phase="Running", Reason="", readiness=true. Elapsed: 16.045662164s Jan 11 15:21:14.524: INFO: Pod "pod-subpath-test-downwardapi-bpbs": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.04989669s Jan 11 15:21:16.529: INFO: Pod "pod-subpath-test-downwardapi-bpbs": Phase="Running", Reason="", readiness=true. Elapsed: 20.054179319s Jan 11 15:21:18.533: INFO: Pod "pod-subpath-test-downwardapi-bpbs": Phase="Running", Reason="", readiness=false. Elapsed: 22.058621927s Jan 11 15:21:20.538: INFO: Pod "pod-subpath-test-downwardapi-bpbs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.063302747s �[1mSTEP�[0m: Saw pod success Jan 11 15:21:20.538: INFO: Pod "pod-subpath-test-downwardapi-bpbs" satisfied condition "Succeeded or Failed" Jan 11 15:21:20.542: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-worker-r73y4c pod pod-subpath-test-downwardapi-bpbs container test-container-subpath-downwardapi-bpbs: <nil> �[1mSTEP�[0m: delete the pod Jan 11 15:21:20.561: INFO: Waiting for pod pod-subpath-test-downwardapi-bpbs to disappear Jan 11 15:21:20.568: INFO: Pod pod-subpath-test-downwardapi-bpbs no longer exists �[1mSTEP�[0m: Deleting pod pod-subpath-test-downwardapi-bpbs Jan 11 15:21:20.568: INFO: Deleting pod "pod-subpath-test-downwardapi-bpbs" in namespace "subpath-9320" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:21:20.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "subpath-9320" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":56,"skipped":1262,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:21:20.636: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 Jan 11 15:21:20.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9149 
run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' Jan 11 15:21:20.766: INFO: stderr: "" Jan 11 15:21:20.766: INFO: stdout: "pod/e2e-test-httpd-pod created\n" �[1mSTEP�[0m: replace the image in the pod with server-side dry-run Jan 11 15:21:20.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9149 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-2"}]}} --dry-run=server' Jan 11 15:21:21.418: INFO: stderr: "" Jan 11 15:21:21.418: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" �[1mSTEP�[0m: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 Jan 11 15:21:21.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9149 delete pods e2e-test-httpd-pod' Jan 11 15:21:23.066: INFO: stderr: "" Jan 11 15:21:23.066: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:21:23.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-9149" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":57,"skipped":1295,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":14,"skipped":305,"failed":7,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-network] DNS should provide DNS for the cluster [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]} [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:20:50.904: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api 
object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 11 15:20:51.517: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 11 15:20:54.545: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API Jan 11 15:21:04.567: INFO: Waiting for webhook configuration to be ready... Jan 11 15:21:14.679: INFO: Waiting for webhook configuration to be ready... Jan 11 15:21:24.786: INFO: Waiting for webhook configuration to be ready... Jan 11 15:21:34.881: INFO: Waiting for webhook configuration to be ready... Jan 11 15:21:44.892: INFO: Waiting for webhook configuration to be ready... Jan 11 15:21:44.892: FAIL: waiting for webhook configuration to be ready Unexpected error: <*errors.errorString | 0xc0002442b0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/apimachinery.registerValidatingWebhookForWebhookConfigurations(0xc000f9af20, {0xc0042eebe8, 0x13}, 0xc0035952c0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1339 +0x7ca k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.10() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:275 +0x73 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x0) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0005f8340, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:21:44.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-583" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-583-markers" for this suite. 
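Both AdmissionWebhook failures in this run time out inside registerValidatingWebhookForWebhookConfigurations (webhook.go:1339): the test registers a validating webhook whose rules target webhook configuration objects themselves and then polls until the webhook answers admission requests, which never happens here. A rough sketch of the kind of ValidatingWebhookConfiguration being registered, using admissionregistration.k8s.io/v1 types, follows; the object name, service reference, path, and CA bundle are illustrative placeholders rather than the framework's own values.

```go
package sketch

import (
	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// webhookConfigForWebhookConfigurations sketches a validating webhook that
// intercepts DELETE operations on webhook configuration objects, roughly what
// the failing registration helper creates before waiting for it to be ready.
func webhookConfigForWebhookConfigurations(namespace string, caBundle []byte) *admissionregistrationv1.ValidatingWebhookConfiguration {
	fail := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/always-deny" // illustrative path
	return &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-webhook-configuration-deletions"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-webhook-configuration-deletions.example.com",
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Delete},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"admissionregistration.k8s.io"},
					APIVersions: []string{"*"},
					Resources:   []string{"validatingwebhookconfigurations", "mutatingwebhookconfigurations"},
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: namespace,
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: caBundle,
			},
			FailurePolicy:           &fail,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
}
```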
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 �[91m�[1m• Failure [54.078 seconds]�[0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23�[0m �[91m�[1mshould not be able to mutate or prevent deletion of webhook configuration objects [Conformance] [It]�[0m �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633�[0m �[91mJan 11 15:21:44.892: waiting for webhook configuration to be ready Unexpected error: <*errors.errorString | 0xc0002442b0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred�[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1339 �[90m------------------------------�[0m [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:18:31.202: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-probe �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:59 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating pod test-webserver-e4e5deb3-e2b5-4a39-abd6-1fd81439a9b9 in namespace container-probe-6724 Jan 11 15:18:33.242: INFO: Started pod test-webserver-e4e5deb3-e2b5-4a39-abd6-1fd81439a9b9 in namespace container-probe-6724 �[1mSTEP�[0m: checking the pod's current state and verifying that restartCount is present Jan 11 15:18:33.245: INFO: Initial restart count of pod test-webserver-e4e5deb3-e2b5-4a39-abd6-1fd81439a9b9 is 0 �[1mSTEP�[0m: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:22:33.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-probe-6724" for this suite. 
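The container-probe test above runs a webserver pod with an HTTP /healthz liveness probe for roughly four minutes and asserts that its restart count never moves from zero; it passes, just slowly. A minimal sketch of such a probe follows, assuming k8s.io/api v0.23 or newer, where the probe's handler field is named ProbeHandler; the port and thresholds are illustrative.

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// healthzLivenessProbe sketches the /healthz HTTP liveness probe used by the
// "should *not* be restarted" test: as long as the endpoint keeps answering,
// the kubelet never restarts the container.
func healthzLivenessProbe() *corev1.Probe {
	return &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/healthz",
				Port: intstr.FromInt(80),
			},
		},
		InitialDelaySeconds: 15,
		PeriodSeconds:       10,
		FailureThreshold:    3,
	}
}
```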
�[32m• [SLOW TEST:242.684 seconds]�[0m [sig-node] Probing container �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23�[0m should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":56,"skipped":1308,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:21:23.087: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename statefulset �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 �[1mSTEP�[0m: Creating service test in namespace statefulset-4503 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Initializing watcher for selector baz=blah,foo=bar �[1mSTEP�[0m: Creating stateful set ss in namespace statefulset-4503 �[1mSTEP�[0m: Waiting until all stateful set ss replicas will be running in namespace statefulset-4503 Jan 11 15:21:23.138: INFO: Found 0 stateful pods, waiting for 1 Jan 11 15:21:33.145: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true �[1mSTEP�[0m: Confirming that stateful set scale up will halt with unhealthy stateful pod Jan 11 15:21:33.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-4503 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 11 15:21:33.315: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 11 15:21:33.315: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 11 15:21:33.315: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 11 15:21:33.319: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 11 15:21:43.326: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 11 
15:21:43.326: INFO: Waiting for statefulset status.replicas updated to 0 Jan 11 15:21:43.346: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999542s Jan 11 15:21:44.351: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.995501138s Jan 11 15:21:45.357: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.991435919s Jan 11 15:21:46.362: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.985314758s Jan 11 15:21:47.367: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.980750352s Jan 11 15:21:48.372: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.975817043s Jan 11 15:21:49.382: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.970324677s Jan 11 15:21:50.385: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.960513028s Jan 11 15:21:51.390: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.956899334s Jan 11 15:21:52.395: INFO: Verifying statefulset ss doesn't scale past 1 for another 952.395848ms �[1mSTEP�[0m: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4503 Jan 11 15:21:53.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-4503 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 15:21:53.585: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 11 15:21:53.585: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 11 15:21:53.585: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 11 15:21:53.589: INFO: Found 1 stateful pods, waiting for 3 Jan 11 15:22:03.595: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 11 15:22:03.595: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 11 15:22:03.595: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true �[1mSTEP�[0m: Verifying that stateful set ss was scaled up in order �[1mSTEP�[0m: Scale down will halt with unhealthy stateful pod Jan 11 15:22:03.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-4503 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 11 15:22:03.762: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 11 15:22:03.762: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 11 15:22:03.762: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 11 15:22:03.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-4503 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 11 15:22:03.930: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 11 15:22:03.930: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 11 15:22:03.930: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 11 15:22:03.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-4503 exec ss-2 -- /bin/sh -x -c mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 11 15:22:04.135: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 11 15:22:04.135: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 11 15:22:04.135: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 11 15:22:04.135: INFO: Waiting for statefulset status.replicas updated to 0 Jan 11 15:22:04.139: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Jan 11 15:22:14.148: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 11 15:22:14.148: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 11 15:22:14.148: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 11 15:22:14.162: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999545s Jan 11 15:22:15.167: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995500876s Jan 11 15:22:16.171: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.990775267s Jan 11 15:22:17.175: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.98637718s Jan 11 15:22:18.181: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.982071541s Jan 11 15:22:19.185: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.976485887s Jan 11 15:22:20.191: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.971378595s Jan 11 15:22:21.196: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.966764985s Jan 11 15:22:22.201: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.961204775s Jan 11 15:22:23.205: INFO: Verifying statefulset ss doesn't scale past 3 for another 956.842348ms �[1mSTEP�[0m: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-4503 Jan 11 15:22:24.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-4503 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 15:22:24.363: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 11 15:22:24.363: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 11 15:22:24.363: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 11 15:22:24.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-4503 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 15:22:24.519: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 11 15:22:24.519: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 11 15:22:24.519: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 11 15:22:24.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-4503 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 11 15:22:24.668: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 11 15:22:24.668: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 11 15:22:24.668: INFO: 
stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 11 15:22:24.669: INFO: Scaling statefulset ss to 0 �[1mSTEP�[0m: Verifying that stateful set ss was scaled down in reverse order [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 Jan 11 15:22:34.687: INFO: Deleting all statefulset in ns statefulset-4503 Jan 11 15:22:34.690: INFO: Scaling statefulset ss to 0 Jan 11 15:22:34.700: INFO: Waiting for statefulset status.replicas updated to 0 Jan 11 15:22:34.703: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:22:34.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "statefulset-4503" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":58,"skipped":1300,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:22:34.760: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename discovery �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 �[1mSTEP�[0m: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 11 15:22:35.231: INFO: Checking APIGroup: apiregistration.k8s.io Jan 11 15:22:35.232: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Jan 11 15:22:35.232: INFO: Versions found [{apiregistration.k8s.io/v1 v1}] Jan 11 15:22:35.232: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Jan 11 15:22:35.232: INFO: Checking APIGroup: apps Jan 11 15:22:35.234: INFO: PreferredVersion.GroupVersion: apps/v1 Jan 11 15:22:35.234: INFO: Versions found [{apps/v1 v1}] Jan 11 15:22:35.234: INFO: apps/v1 matches apps/v1 Jan 11 15:22:35.234: INFO: Checking APIGroup: events.k8s.io Jan 11 15:22:35.236: 
INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Jan 11 15:22:35.236: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Jan 11 15:22:35.236: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Jan 11 15:22:35.236: INFO: Checking APIGroup: authentication.k8s.io Jan 11 15:22:35.238: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Jan 11 15:22:35.238: INFO: Versions found [{authentication.k8s.io/v1 v1}] Jan 11 15:22:35.238: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Jan 11 15:22:35.238: INFO: Checking APIGroup: authorization.k8s.io Jan 11 15:22:35.239: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Jan 11 15:22:35.239: INFO: Versions found [{authorization.k8s.io/v1 v1}] Jan 11 15:22:35.239: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Jan 11 15:22:35.239: INFO: Checking APIGroup: autoscaling Jan 11 15:22:35.240: INFO: PreferredVersion.GroupVersion: autoscaling/v2 Jan 11 15:22:35.241: INFO: Versions found [{autoscaling/v2 v2} {autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Jan 11 15:22:35.241: INFO: autoscaling/v2 matches autoscaling/v2 Jan 11 15:22:35.241: INFO: Checking APIGroup: batch Jan 11 15:22:35.242: INFO: PreferredVersion.GroupVersion: batch/v1 Jan 11 15:22:35.242: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Jan 11 15:22:35.242: INFO: batch/v1 matches batch/v1 Jan 11 15:22:35.242: INFO: Checking APIGroup: certificates.k8s.io Jan 11 15:22:35.244: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Jan 11 15:22:35.244: INFO: Versions found [{certificates.k8s.io/v1 v1}] Jan 11 15:22:35.244: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Jan 11 15:22:35.244: INFO: Checking APIGroup: networking.k8s.io Jan 11 15:22:35.245: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Jan 11 15:22:35.245: INFO: Versions found [{networking.k8s.io/v1 v1}] Jan 11 15:22:35.245: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Jan 11 15:22:35.245: INFO: Checking APIGroup: policy Jan 11 15:22:35.246: INFO: PreferredVersion.GroupVersion: policy/v1 Jan 11 15:22:35.246: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}] Jan 11 15:22:35.246: INFO: policy/v1 matches policy/v1 Jan 11 15:22:35.246: INFO: Checking APIGroup: rbac.authorization.k8s.io Jan 11 15:22:35.247: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Jan 11 15:22:35.247: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1}] Jan 11 15:22:35.247: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Jan 11 15:22:35.247: INFO: Checking APIGroup: storage.k8s.io Jan 11 15:22:35.249: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Jan 11 15:22:35.249: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Jan 11 15:22:35.249: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Jan 11 15:22:35.249: INFO: Checking APIGroup: admissionregistration.k8s.io Jan 11 15:22:35.250: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Jan 11 15:22:35.250: INFO: Versions found [{admissionregistration.k8s.io/v1 v1}] Jan 11 15:22:35.250: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Jan 11 15:22:35.250: INFO: Checking APIGroup: apiextensions.k8s.io Jan 11 15:22:35.252: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Jan 11 15:22:35.252: INFO: Versions found [{apiextensions.k8s.io/v1 v1}] Jan 11 15:22:35.252: INFO: apiextensions.k8s.io/v1 matches 
apiextensions.k8s.io/v1 Jan 11 15:22:35.252: INFO: Checking APIGroup: scheduling.k8s.io Jan 11 15:22:35.254: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Jan 11 15:22:35.254: INFO: Versions found [{scheduling.k8s.io/v1 v1}] Jan 11 15:22:35.254: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Jan 11 15:22:35.254: INFO: Checking APIGroup: coordination.k8s.io Jan 11 15:22:35.256: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Jan 11 15:22:35.256: INFO: Versions found [{coordination.k8s.io/v1 v1}] Jan 11 15:22:35.256: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Jan 11 15:22:35.256: INFO: Checking APIGroup: node.k8s.io Jan 11 15:22:35.258: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 Jan 11 15:22:35.258: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] Jan 11 15:22:35.259: INFO: node.k8s.io/v1 matches node.k8s.io/v1 Jan 11 15:22:35.259: INFO: Checking APIGroup: discovery.k8s.io Jan 11 15:22:35.260: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 Jan 11 15:22:35.260: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}] Jan 11 15:22:35.260: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 Jan 11 15:22:35.260: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io Jan 11 15:22:35.262: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta2 Jan 11 15:22:35.262: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta2 v1beta2} {flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] Jan 11 15:22:35.262: INFO: flowcontrol.apiserver.k8s.io/v1beta2 matches flowcontrol.apiserver.k8s.io/v1beta2 [AfterEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:22:35.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "discovery-4262" for this suite. 
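The per-group checks above compare each APIGroup's preferredVersion against the group versions the server advertises. A rough manual equivalent of the same check, assuming kubectl access to the workload cluster and jq on the path, would be:

# List every API group together with the version the server marks as preferred
kubectl --kubeconfig=/tmp/kubeconfig get --raw /apis \
  | jq -r '.groups[] | "\(.name) -> \(.preferredVersion.groupVersion)"'

Each line of that output should match one "PreferredVersion.GroupVersion" entry in the log above.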
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":59,"skipped":1321,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":14,"skipped":305,"failed":8,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-network] DNS should provide DNS for the cluster [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]} [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:21:44.984: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 11 15:21:45.460: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has 
paired with the endpoint Jan 11 15:21:48.502: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API Jan 11 15:21:58.537: INFO: Waiting for webhook configuration to be ready... Jan 11 15:22:08.649: INFO: Waiting for webhook configuration to be ready... Jan 11 15:22:18.750: INFO: Waiting for webhook configuration to be ready... Jan 11 15:22:28.848: INFO: Waiting for webhook configuration to be ready... Jan 11 15:22:38.859: INFO: Waiting for webhook configuration to be ready... Jan 11 15:22:38.860: FAIL: waiting for webhook configuration to be ready Unexpected error: <*errors.errorString | 0xc0002442b0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/apimachinery.registerValidatingWebhookForWebhookConfigurations(0xc000f9af20, {0xc00493e288, 0x14}, 0xc000edd180, 0xd7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1339 +0x7ca k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.10() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:275 +0x73 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x0) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0005f8340, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:22:38.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-2757" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-2757-markers" for this suite. 
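The repeated "Waiting for webhook configuration to be ready..." messages above ended in the timeout that fails this spec. A debugging sketch against the workload cluster, reusing the namespace and deployment names from this run and assuming the namespace has not yet been torn down, might look like:

# Check whether the test's webhook configurations were ever accepted by the API server
kubectl --kubeconfig=/tmp/kubeconfig get validatingwebhookconfigurations,mutatingwebhookconfigurations

# Inspect the webhook backend the test deployed; an unready endpoint here would explain the timeout
kubectl --kubeconfig=/tmp/kubeconfig -n webhook-2757 get deployment,service,endpoints,pods -o wide
kubectl --kubeconfig=/tmp/kubeconfig -n webhook-2757 logs deployment/sample-webhook-deployment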
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• Failure [53.950 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 11 15:22:38.860: waiting for webhook configuration to be ready Unexpected error: <*errors.errorString | 0xc0002442b0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1339
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":14,"skipped":305,"failed":9,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-network] DNS should provide DNS for the cluster [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:22:35.363: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
[It] should be able to create a functioning NodePort service [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating service nodeport-test with type=NodePort in namespace services-1671
STEP: creating replication controller nodeport-test in namespace services-1671
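The probes that follow exercise the Service by DNS name, by cluster IP, and by node IP plus node port from inside an exec pod. The same checks can be run by hand with the values seen in this run (exec pod execpodzmj5h, cluster IP 10.128.14.90, node 172.18.0.5, node port 31862):

# Reach the Service by name, by its cluster IP, and via the allocated NodePort on a node
kubectl --kubeconfig=/tmp/kubeconfig -n services-1671 exec execpodzmj5h -- \
  /bin/sh -c 'echo hostName | nc -v -t -w 2 nodeport-test 80'
kubectl --kubeconfig=/tmp/kubeconfig -n services-1671 exec execpodzmj5h -- \
  /bin/sh -c 'echo hostName | nc -v -t -w 2 10.128.14.90 80'
kubectl --kubeconfig=/tmp/kubeconfig -n services-1671 exec execpodzmj5h -- \
  /bin/sh -c 'echo hostName | nc -v -t -w 2 172.18.0.5 31862'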
I0111 15:22:35.420483 16 runners.go:193] Created replication controller with name: nodeport-test, namespace: services-1671, replica count: 2 I0111 15:22:38.473617 16 runners.go:193] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 11 15:22:38.473: INFO: Creating new exec pod Jan 11 15:22:41.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1671 exec execpodzmj5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Jan 11 15:22:41.776: INFO: stderr: "+ nc -v -t -w 2 nodeport-test 80\n+ echo hostName\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" Jan 11 15:22:41.776: INFO: stdout: "" Jan 11 15:22:42.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1671 exec execpodzmj5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Jan 11 15:22:42.954: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" Jan 11 15:22:42.954: INFO: stdout: "nodeport-test-8xn4v" Jan 11 15:22:42.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1671 exec execpodzmj5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.128.14.90 80' Jan 11 15:22:43.122: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.128.14.90 80\nConnection to 10.128.14.90 80 port [tcp/http] succeeded!\n" Jan 11 15:22:43.122: INFO: stdout: "" Jan 11 15:22:44.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1671 exec execpodzmj5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.128.14.90 80' Jan 11 15:22:44.305: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.128.14.90 80\nConnection to 10.128.14.90 80 port [tcp/http] succeeded!\n" Jan 11 15:22:44.305: INFO: stdout: "nodeport-test-8xn4v" Jan 11 15:22:44.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1671 exec execpodzmj5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.5 31862' Jan 11 15:22:44.474: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.5 31862\nConnection to 172.18.0.5 31862 port [tcp/*] succeeded!\n" Jan 11 15:22:44.474: INFO: stdout: "" Jan 11 15:22:45.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1671 exec execpodzmj5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.5 31862' Jan 11 15:22:45.666: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.5 31862\nConnection to 172.18.0.5 31862 port [tcp/*] succeeded!\n" Jan 11 15:22:45.666: INFO: stdout: "" Jan 11 15:22:46.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1671 exec execpodzmj5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.5 31862' Jan 11 15:22:46.646: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.5 31862\nConnection to 172.18.0.5 31862 port [tcp/*] succeeded!\n" Jan 11 15:22:46.646: INFO: stdout: "" Jan 11 15:22:47.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1671 exec execpodzmj5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.5 31862' Jan 11 15:22:47.690: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.5 31862\nConnection to 172.18.0.5 31862 port [tcp/*] succeeded!\n" Jan 11 15:22:47.690: INFO: stdout: "nodeport-test-vn59x" Jan 11 15:22:47.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig 
--namespace=services-1671 exec execpodzmj5h -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.4 31862' Jan 11 15:22:47.901: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.4 31862\nConnection to 172.18.0.4 31862 port [tcp/*] succeeded!\n" Jan 11 15:22:47.901: INFO: stdout: "nodeport-test-vn59x" [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:22:47.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-1671" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":60,"skipped":1370,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:22:33.891: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication Jan 11 15:22:34.534: INFO: role binding webhook-auth-reader already exists �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 11 15:22:34.552: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 11 15:22:37.588: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Registering the webhook via the AdmissionRegistration API �[1mSTEP�[0m: create a pod that should be denied by the webhook �[1mSTEP�[0m: create a pod that causes the webhook to hang �[1mSTEP�[0m: create a configmap that should be denied by the webhook �[1mSTEP�[0m: create a configmap that should be admitted by the webhook 
�[1mSTEP�[0m: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook �[1mSTEP�[0m: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook �[1mSTEP�[0m: create a namespace that bypass the webhook �[1mSTEP�[0m: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:22:48.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-5979" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-5979-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":57,"skipped":1309,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:22:48.966: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename deployment �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 11 15:22:49.021: INFO: Creating deployment "test-recreate-deployment" Jan 11 15:22:49.057: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jan 11 15:22:49.095: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jan 11 15:22:51.104: INFO: Waiting deployment "test-recreate-deployment" to complete Jan 11 15:22:51.108: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jan 11 15:22:51.118: INFO: Updating deployment test-recreate-deployment Jan 11 15:22:51.118: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Jan 11 15:22:51.236: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment 
deployment-998 6ee8105a-9441-4ec2-9866-5551727bff4d 12516 2 2023-01-11 15:22:49 +0000 UTC <nil> <nil> map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-01-11 15:22:51 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-11 15:22:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004e9c4b8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-01-11 15:22:51 +0000 UTC,LastTransitionTime:2023-01-11 15:22:51 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5b99bd5487" is progressing.,LastUpdateTime:2023-01-11 15:22:51 +0000 UTC,LastTransitionTime:2023-01-11 15:22:49 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jan 11 15:22:51.241: INFO: New ReplicaSet "test-recreate-deployment-5b99bd5487" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5b99bd5487 deployment-998 1f1b4017-0e59-498c-a425-1ede1bbfc6db 12513 1 2023-01-11 15:22:51 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:5b99bd5487] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 6ee8105a-9441-4ec2-9866-5551727bff4d 0xc004e9cb77 0xc004e9cb78}] [] [{kube-controller-manager Update apps/v1 2023-01-11 15:22:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ee8105a-9441-4ec2-9866-5551727bff4d\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-11 15:22:51 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5b99bd5487,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:5b99bd5487] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004e9cc38 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 11 15:22:51.241: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jan 11 15:22:51.242: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-594f666cd9 deployment-998 54ec1d4f-3749-4554-9b0d-1741a2e096ff 12504 2 2023-01-11 15:22:49 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:594f666cd9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 6ee8105a-9441-4ec2-9866-5551727bff4d 0xc004e9ca57 0xc004e9ca58}] [] [{kube-controller-manager Update apps/v1 2023-01-11 15:22:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ee8105a-9441-4ec2-9866-5551727bff4d\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-11 15:22:51 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 594f666cd9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:594f666cd9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004e9cb08 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 11 15:22:51.248: INFO: Pod "test-recreate-deployment-5b99bd5487-rt6fx" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5b99bd5487-rt6fx test-recreate-deployment-5b99bd5487- deployment-998 aae770c1-1a66-4a76-90d1-04cc00ef90c7 12515 0 2023-01-11 15:22:51 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:5b99bd5487] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5b99bd5487 1f1b4017-0e59-498c-a425-1ede1bbfc6db 0xc0056173c7 0xc0056173c8}] [] [{kube-controller-manager Update v1 2023-01-11 15:22:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1f1b4017-0e59-498c-a425-1ede1bbfc6db\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-11 15:22:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8t4g9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8t4g9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-8jx80k-worker-r73y4c,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,
Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:22:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:22:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:22:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:22:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2023-01-11 15:22:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:22:51.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "deployment-998" for this suite. 
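The object dump above shows the Recreate strategy at work: the old agnhost ReplicaSet is scaled to zero before the new httpd ReplicaSet's pod is created. A minimal manifest reproducing that setup, using the image the test switches to, could be:

# Deployment with strategy.type=Recreate: old pods are deleted before new ones are created
cat <<'EOF' | kubectl --kubeconfig=/tmp/kubeconfig apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: httpd
        image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2
EOF

Changing the container image and re-applying then triggers the delete-then-create rollout the test watches for.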
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":58,"skipped":1317,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:22:47.963: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating projection with configMap that has name projected-configmap-test-upd-a234a8b1-aa41-4f31-9e87-369cfd7e88c9 �[1mSTEP�[0m: Creating the pod Jan 11 15:22:48.013: INFO: The status of Pod pod-projected-configmaps-d987ac2d-7482-41ae-95a4-90a9b5e9f047 is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:22:50.019: INFO: The status of Pod pod-projected-configmaps-d987ac2d-7482-41ae-95a4-90a9b5e9f047 is Running (Ready = true) �[1mSTEP�[0m: Updating configmap projected-configmap-test-upd-a234a8b1-aa41-4f31-9e87-369cfd7e88c9 �[1mSTEP�[0m: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:22:52.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-8755" for this suite. 
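The spec above updates a ConfigMap that is mounted through a projected volume and waits for the kubelet to refresh the file. Assuming a pod that mounts the ConfigMap the same way, the propagation can be observed by hand; the object name and mount path below stand in for the generated ones:

# Change a key in the mounted ConfigMap (placeholder name), then read the projected file back
kubectl --kubeconfig=/tmp/kubeconfig -n projected-8755 patch configmap projected-configmap-test-upd \
  --type merge -p '{"data":{"data-1":"value-2"}}'
# The kubelet refreshes projected volumes on its sync loop, so the new value appears after a short delay
kubectl --kubeconfig=/tmp/kubeconfig -n projected-8755 exec pod-projected-configmaps -- \
  cat /etc/projected-configmap-volume/data-1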
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":61,"skipped":1390,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:22:51.291: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating configMap with name configmap-test-volume-map-680c88fa-2945-47ab-832f-d7c7de90e1ae �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 11 15:22:51.341: INFO: Waiting up to 5m0s for pod "pod-configmaps-ed3bc2ea-8063-41b2-9cb5-52674b7e82db" in namespace "configmap-3935" to be "Succeeded or Failed" Jan 11 15:22:51.346: INFO: Pod "pod-configmaps-ed3bc2ea-8063-41b2-9cb5-52674b7e82db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.250921ms Jan 11 15:22:53.350: INFO: Pod "pod-configmaps-ed3bc2ea-8063-41b2-9cb5-52674b7e82db": Phase="Running", Reason="", readiness=false. Elapsed: 2.008982484s Jan 11 15:22:55.357: INFO: Pod "pod-configmaps-ed3bc2ea-8063-41b2-9cb5-52674b7e82db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015401801s �[1mSTEP�[0m: Saw pod success Jan 11 15:22:55.357: INFO: Pod "pod-configmaps-ed3bc2ea-8063-41b2-9cb5-52674b7e82db" satisfied condition "Succeeded or Failed" Jan 11 15:22:55.361: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-md-0-chlxb-c9b4c6cfd-57pm4 pod pod-configmaps-ed3bc2ea-8063-41b2-9cb5-52674b7e82db container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 11 15:22:55.381: INFO: Waiting for pod pod-configmaps-ed3bc2ea-8063-41b2-9cb5-52674b7e82db to disappear Jan 11 15:22:55.385: INFO: Pod pod-configmaps-ed3bc2ea-8063-41b2-9cb5-52674b7e82db no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:22:55.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-3935" for this suite. 
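This spec mounts a single ConfigMap key at a mapped path with an explicit file mode. A self-contained sketch of the same mapping, with placeholder object names and a busybox container instead of the suite's agnhost image, might look like:

cat <<'EOF' | kubectl --kubeconfig=/tmp/kubeconfig apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map
data:
  data-2: value-2
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.36
    # Print the mode and content of the single mapped key
    command: ["sh", "-c", "ls -ln /etc/configmap-volume/path/to/data-2 && cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-2
        path: path/to/data-2
        mode: 0400
EOF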
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":59,"skipped":1329,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:22:55.440: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename disruption �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [BeforeEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:22:55.465: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename disruption-2 �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Waiting for the pdb to be processed �[1mSTEP�[0m: Waiting for the pdb to be processed �[1mSTEP�[0m: Waiting for the pdb to be processed �[1mSTEP�[0m: listing a collection of PDBs across all namespaces �[1mSTEP�[0m: listing a collection of PDBs in namespace disruption-717 �[1mSTEP�[0m: deleting a collection of PDBs �[1mSTEP�[0m: Waiting for the PDB collection to be deleted [AfterEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:23:01.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "disruption-2-5554" for this suite. [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:23:01.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "disruption-717" for this suite. 
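The DisruptionController spec above creates PodDisruptionBudgets, lists them across all namespaces, and then deletes them as a collection. The same operations via kubectl, against the namespace used in this run, would be roughly:

# List PDBs in every namespace, then in one namespace, then delete them as a collection
kubectl --kubeconfig=/tmp/kubeconfig get poddisruptionbudgets --all-namespaces
kubectl --kubeconfig=/tmp/kubeconfig -n disruption-717 get poddisruptionbudgets
kubectl --kubeconfig=/tmp/kubeconfig -n disruption-717 delete poddisruptionbudgets --all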
•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":60,"skipped":1345,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
S
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:23:01.598: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should run through the lifecycle of a ServiceAccount [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a ServiceAccount
STEP: watching for the ServiceAccount to be added
STEP: patching the ServiceAccount
STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
STEP: deleting the ServiceAccount
[AfterEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:23:01.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3861" for this suite.
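The ServiceAccount lifecycle spec above creates, patches, label-selects, and deletes a ServiceAccount. An equivalent kubectl walk-through, with a placeholder name and label, could be:

# Create, patch, find by label selector, and delete a ServiceAccount
kubectl --kubeconfig=/tmp/kubeconfig -n svcaccounts-3861 create serviceaccount e2e-sa-example
kubectl --kubeconfig=/tmp/kubeconfig -n svcaccounts-3861 patch serviceaccount e2e-sa-example \
  --type merge -p '{"metadata":{"labels":{"e2e":"lifecycle"}}}'
kubectl --kubeconfig=/tmp/kubeconfig get serviceaccounts --all-namespaces -l e2e=lifecycle
kubectl --kubeconfig=/tmp/kubeconfig -n svcaccounts-3861 delete serviceaccount -l e2e=lifecycle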
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":61,"skipped":1346,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:23:01.774: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating configMap with name projected-configmap-test-volume-a8833fa0-deb7-4580-b57b-8195e334f2a0 �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 11 15:23:01.824: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9c9a1fd9-53b4-4ed1-89c8-33526746c623" in namespace "projected-5144" to be "Succeeded or Failed" Jan 11 15:23:01.828: INFO: Pod "pod-projected-configmaps-9c9a1fd9-53b4-4ed1-89c8-33526746c623": Phase="Pending", Reason="", readiness=false. Elapsed: 4.393902ms Jan 11 15:23:03.834: INFO: Pod "pod-projected-configmaps-9c9a1fd9-53b4-4ed1-89c8-33526746c623": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009908201s Jan 11 15:23:05.840: INFO: Pod "pod-projected-configmaps-9c9a1fd9-53b4-4ed1-89c8-33526746c623": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016684729s �[1mSTEP�[0m: Saw pod success Jan 11 15:23:05.841: INFO: Pod "pod-projected-configmaps-9c9a1fd9-53b4-4ed1-89c8-33526746c623" satisfied condition "Succeeded or Failed" Jan 11 15:23:05.847: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-md-0-chlxb-c9b4c6cfd-57pm4 pod pod-projected-configmaps-9c9a1fd9-53b4-4ed1-89c8-33526746c623 container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 11 15:23:05.876: INFO: Waiting for pod pod-projected-configmaps-9c9a1fd9-53b4-4ed1-89c8-33526746c623 to disappear Jan 11 15:23:05.882: INFO: Pod pod-projected-configmaps-9c9a1fd9-53b4-4ed1-89c8-33526746c623 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:23:05.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-5144" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":62,"skipped":1365,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:23:05.983: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating Agnhost RC Jan 11 15:23:06.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5277 create -f -' Jan 11 15:23:06.582: INFO: stderr: "" Jan 11 15:23:06.583: INFO: stdout: "replicationcontroller/agnhost-primary created\n" �[1mSTEP�[0m: Waiting for Agnhost primary to start. Jan 11 15:23:07.590: INFO: Selector matched 1 pods for map[app:agnhost] Jan 11 15:23:07.590: INFO: Found 0 / 1 Jan 11 15:23:08.600: INFO: Selector matched 1 pods for map[app:agnhost] Jan 11 15:23:08.600: INFO: Found 1 / 1 Jan 11 15:23:08.600: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 �[1mSTEP�[0m: patching all pods Jan 11 15:23:08.608: INFO: Selector matched 1 pods for map[app:agnhost] Jan 11 15:23:08.608: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 11 15:23:08.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5277 patch pod agnhost-primary-mrlhw -p {"metadata":{"annotations":{"x":"y"}}}' Jan 11 15:23:08.871: INFO: stderr: "" Jan 11 15:23:08.871: INFO: stdout: "pod/agnhost-primary-mrlhw patched\n" �[1mSTEP�[0m: checking annotations Jan 11 15:23:08.878: INFO: Selector matched 1 pods for map[app:agnhost] Jan 11 15:23:08.878: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:23:08.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-5277" for this suite. 
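The Kubectl patch spec above annotates every pod that belongs to the agnhost replication controller. Generalized to all pods matched by the selector rather than a single named pod, the same patch looks like:

# Annotate every pod matched by the RC's app=agnhost selector with x=y
kubectl --kubeconfig=/tmp/kubeconfig -n kubectl-5277 get pods -l app=agnhost -o name \
  | xargs -I{} kubectl --kubeconfig=/tmp/kubeconfig -n kubectl-5277 patch {} -p '{"metadata":{"annotations":{"x":"y"}}}'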
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":63,"skipped":1386,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:23:08.915: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename podtemplate �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:23:09.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "podtemplate-1558" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":64,"skipped":1389,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:23:09.204: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename e2e-kubelet-etc-hosts �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Setting up the test �[1mSTEP�[0m: Creating hostNetwork=false pod Jan 11 15:23:09.274: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:23:11.298: INFO: The status of Pod test-pod is Running (Ready = true) �[1mSTEP�[0m: Creating hostNetwork=true pod Jan 11 15:23:11.330: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:23:13.339: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:23:15.338: INFO: The status of Pod test-host-network-pod is Running (Ready = true) �[1mSTEP�[0m: Running the test �[1mSTEP�[0m: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jan 11 15:23:15.346: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1704 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 15:23:15.346: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 11 15:23:15.347: INFO: ExecWithOptions: Clientset creation Jan 11 15:23:15.347: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-1704/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING)) Jan 11 15:23:15.524: INFO: Exec stderr: "" Jan 11 15:23:15.524: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1704 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 15:23:15.524: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 11 15:23:15.526: INFO: ExecWithOptions: Clientset creation Jan 11 15:23:15.526: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-1704/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING)) Jan 11 15:23:15.689: INFO: Exec stderr: "" Jan 11 15:23:15.689: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1704 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 15:23:15.689: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 11 15:23:15.690: INFO: ExecWithOptions: Clientset creation Jan 11 15:23:15.690: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-1704/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING)) Jan 11 15:23:15.850: INFO: Exec stderr: "" Jan 11 15:23:15.850: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1704 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 15:23:15.850: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 11 15:23:15.852: INFO: ExecWithOptions: Clientset creation Jan 11 15:23:15.852: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-1704/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING)) Jan 11 15:23:16.012: 
INFO: Exec stderr: "" �[1mSTEP�[0m: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jan 11 15:23:16.012: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1704 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 15:23:16.012: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 11 15:23:16.015: INFO: ExecWithOptions: Clientset creation Jan 11 15:23:16.015: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-1704/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-3&container=busybox-3&stderr=true&stdout=true %!s(MISSING)) Jan 11 15:23:16.178: INFO: Exec stderr: "" Jan 11 15:23:16.178: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1704 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 15:23:16.178: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 11 15:23:16.180: INFO: ExecWithOptions: Clientset creation Jan 11 15:23:16.180: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-1704/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-3&container=busybox-3&stderr=true&stdout=true %!s(MISSING)) Jan 11 15:23:16.347: INFO: Exec stderr: "" �[1mSTEP�[0m: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jan 11 15:23:16.347: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1704 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 15:23:16.347: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 11 15:23:16.348: INFO: ExecWithOptions: Clientset creation Jan 11 15:23:16.348: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-1704/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING)) Jan 11 15:23:16.497: INFO: Exec stderr: "" Jan 11 15:23:16.497: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1704 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 15:23:16.497: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 11 15:23:16.499: INFO: ExecWithOptions: Clientset creation Jan 11 15:23:16.499: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-1704/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING)) Jan 11 15:23:16.633: INFO: Exec stderr: "" Jan 11 15:23:16.633: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1704 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 15:23:16.633: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 11 15:23:16.634: INFO: ExecWithOptions: Clientset creation Jan 11 15:23:16.634: INFO: ExecWithOptions: execute(POST 
https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-1704/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING)) Jan 11 15:23:16.773: INFO: Exec stderr: "" Jan 11 15:23:16.773: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1704 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 11 15:23:16.774: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 11 15:23:16.775: INFO: ExecWithOptions: Clientset creation Jan 11 15:23:16.775: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-1704/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING)) Jan 11 15:23:16.916: INFO: Exec stderr: "" [AfterEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:23:16.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "e2e-kubelet-etc-hosts-1704" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":65,"skipped":1435,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:23:16.966: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename pods �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating the pod �[1mSTEP�[0m: setting up watch �[1mSTEP�[0m: submitting the pod to kubernetes Jan 11 15:23:17.018: INFO: observed the pod list �[1mSTEP�[0m: verifying the pod is in kubernetes �[1mSTEP�[0m: verifying pod creation was observed �[1mSTEP�[0m: deleting the pod gracefully �[1mSTEP�[0m: verifying pod deletion was observed [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:23:21.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pods-2927" for this suite. 
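The KubeletManagedEtcHosts spec above verifies /etc/hosts handling by exec'ing `cat` inside each container (the ExecWithOptions entries and the POST .../exec requests). A rough, hedged sketch of issuing one such exec with client-go's remotecommand package follows; the namespace, pod, and container names mirror the log but are placeholders here, and this is not the framework's actual helper:

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Build the same kind of exec request ExecWithOptions logs above:
	// POST .../pods/test-pod/exec?command=cat&command=/etc/hosts&container=busybox-1...
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("e2e-kubelet-etc-hosts-1704").
		Name("test-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "busybox-1",
			Command:   []string{"cat", "/etc/hosts"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Print(stdout.String())
}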
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":66,"skipped":1442,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:23:21.690: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename statefulset �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 �[1mSTEP�[0m: Creating service test in namespace statefulset-8139 [It] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating statefulset ss in namespace statefulset-8139 Jan 11 15:23:21.750: INFO: Found 0 stateful pods, waiting for 1 Jan 11 15:23:31.758: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true �[1mSTEP�[0m: getting scale subresource �[1mSTEP�[0m: updating a scale subresource �[1mSTEP�[0m: verifying the statefulset Spec.Replicas was modified �[1mSTEP�[0m: Patch a scale subresource �[1mSTEP�[0m: verifying the statefulset Spec.Replicas was modified [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 Jan 11 15:23:31.815: INFO: Deleting all statefulset in ns statefulset-8139 Jan 11 15:23:31.823: INFO: Scaling statefulset ss to 0 Jan 11 15:23:41.892: INFO: Waiting for statefulset status.replicas updated to 0 Jan 11 15:23:41.898: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:23:41.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "statefulset-8139" for this suite. 
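The StatefulSet spec above exercises the scale subresource (getting it, updating it, and patching Spec.Replicas). A minimal client-go sketch of reading and updating that subresource, assuming a StatefulSet named "ss" in the namespace from the log, might look like the following; it is illustrative only:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// Read the scale subresource of the StatefulSet, then write Spec.Replicas
	// back through the same subresource (the same path "kubectl scale" uses).
	scale, err := cs.AppsV1().StatefulSets("statefulset-8139").GetScale(ctx, "ss", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 2
	updated, err := cs.AppsV1().StatefulSets("statefulset-8139").UpdateScale(ctx, "ss", scale, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("scale now:", updated.Spec.Replicas)
}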
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":67,"skipped":1448,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:23:41.960: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir-wrapper �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 11 15:23:42.023: INFO: The status of Pod pod-secrets-82ab78a5-b183-4fa0-8755-620509813c3c is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:23:44.031: INFO: The status of Pod pod-secrets-82ab78a5-b183-4fa0-8755-620509813c3c is Running (Ready = true) �[1mSTEP�[0m: Cleaning up the secret �[1mSTEP�[0m: Cleaning up the configmap �[1mSTEP�[0m: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:23:44.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-wrapper-9597" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":68,"skipped":1452,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:23:44.132: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 11 15:23:44.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7742 create -f -' Jan 11 15:23:44.617: INFO: stderr: "" Jan 11 15:23:44.618: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Jan 11 15:23:44.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7742 create -f -' Jan 11 15:23:45.101: INFO: stderr: "" Jan 11 15:23:45.101: INFO: stdout: "service/agnhost-primary created\n" �[1mSTEP�[0m: Waiting for Agnhost primary to start. Jan 11 15:23:46.109: INFO: Selector matched 1 pods for map[app:agnhost] Jan 11 15:23:46.109: INFO: Found 0 / 1 Jan 11 15:23:47.106: INFO: Selector matched 1 pods for map[app:agnhost] Jan 11 15:23:47.106: INFO: Found 1 / 1 Jan 11 15:23:47.107: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 11 15:23:47.111: INFO: Selector matched 1 pods for map[app:agnhost] Jan 11 15:23:47.111: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jan 11 15:23:47.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7742 describe pod agnhost-primary-sd9wl' Jan 11 15:23:47.320: INFO: stderr: "" Jan 11 15:23:47.320: INFO: stdout: "Name: agnhost-primary-sd9wl\nNamespace: kubectl-7742\nPriority: 0\nNode: k8s-upgrade-and-conformance-8jx80k-worker-b15lfw/172.18.0.6\nStart Time: Wed, 11 Jan 2023 15:23:44 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nStatus: Running\nIP: 192.168.2.80\nIPs:\n IP: 192.168.2.80\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://ed097a3bf231c737e92c9d556cd52b4c3b158d6b3346d4491525fecc90bf0903\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 11 Jan 2023 15:23:45 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lmtm7 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-lmtm7:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 2s default-scheduler Successfully assigned kubectl-7742/agnhost-primary-sd9wl to k8s-upgrade-and-conformance-8jx80k-worker-b15lfw\n Normal Pulled 2s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 2s kubelet Created container agnhost-primary\n Normal Started 2s kubelet Started container agnhost-primary\n" Jan 11 15:23:47.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7742 describe rc agnhost-primary' Jan 11 15:23:47.529: INFO: stderr: "" Jan 11 15:23:47.529: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-7742\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-primary-sd9wl\n" Jan 11 15:23:47.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7742 describe service agnhost-primary' Jan 11 15:23:47.701: INFO: stderr: "" Jan 11 15:23:47.701: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-7742\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.134.35.35\nIPs: 10.134.35.35\nPort: <unset> 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 192.168.2.80:6379\nSession Affinity: None\nEvents: <none>\n" Jan 11 15:23:47.711: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7742 describe node k8s-upgrade-and-conformance-8jx80k-jsr69-d69v8' Jan 11 15:23:47.939: INFO: stderr: "" Jan 11 15:23:47.939: INFO: stdout: "Name: k8s-upgrade-and-conformance-8jx80k-jsr69-d69v8\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=k8s-upgrade-and-conformance-8jx80k-jsr69-d69v8\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\n node.kubernetes.io/exclude-from-external-load-balancers=\nAnnotations: cluster.x-k8s.io/cluster-name: k8s-upgrade-and-conformance-8jx80k\n cluster.x-k8s.io/cluster-namespace: k8s-upgrade-and-conformance-8m9snr\n cluster.x-k8s.io/machine: k8s-upgrade-and-conformance-8jx80k-jsr69-d69v8\n cluster.x-k8s.io/owner-kind: KubeadmControlPlane\n cluster.x-k8s.io/owner-name: k8s-upgrade-and-conformance-8jx80k-jsr69\n kubeadm.alpha.kubernetes.io/cri-socket: /var/run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 11 Jan 2023 15:00:45 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: k8s-upgrade-and-conformance-8jx80k-jsr69-d69v8\n AcquireTime: <unset>\n RenewTime: Wed, 11 Jan 2023 15:23:45 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 11 Jan 2023 15:22:44 +0000 Wed, 11 Jan 2023 15:00:45 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 11 Jan 2023 15:22:44 +0000 Wed, 11 Jan 2023 15:00:45 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 11 Jan 2023 15:22:44 +0000 Wed, 11 Jan 2023 15:00:45 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 11 Jan 2023 15:22:44 +0000 Wed, 11 Jan 2023 15:02:18 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.9\n Hostname: k8s-upgrade-and-conformance-8jx80k-jsr69-d69v8\nCapacity:\n cpu: 8\n ephemeral-storage: 253869360Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65860692Ki\n pods: 110\nAllocatable:\n cpu: 8\n ephemeral-storage: 253869360Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65860692Ki\n pods: 110\nSystem Info:\n Machine ID: 8d3eec4153584a8493b9901495d8b10c\n System UUID: 5ee866bf-cd64-4b19-a927-9a92aa7b8a4e\n Boot ID: 3f243769-59bc-4a54-8cb7-3ff551b179a9\n Kernel Version: 5.4.0-1081-gke\n OS Image: Ubuntu 21.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.6.4\n Kubelet Version: v1.23.15\n Kube-Proxy Version: v1.23.15\nPodCIDR: 192.168.5.0/24\nPodCIDRs: 192.168.5.0/24\nProviderID: docker:////k8s-upgrade-and-conformance-8jx80k-jsr69-d69v8\nNon-terminated Pods: (6 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-k8s-upgrade-and-conformance-8jx80k-jsr69-d69v8 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 22m\n kube-system kindnet-j6258 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 23m\n kube-system kube-apiserver-k8s-upgrade-and-conformance-8jx80k-jsr69-d69v8 250m (3%) 0 (0%) 0 (0%) 0 (0%) 23m\n kube-system kube-controller-manager-k8s-upgrade-and-conformance-8jx80k-jsr69-d69v8 200m (2%) 0 (0%) 0 (0%) 0 (0%) 
23m\n kube-system kube-proxy-gtd5z 0 (0%) 0 (0%) 0 (0%) 0 (0%) 21m\n kube-system kube-scheduler-k8s-upgrade-and-conformance-8jx80k-jsr69-d69v8 100m (1%) 0 (0%) 0 (0%) 0 (0%) 23m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 750m (9%) 100m (1%)\n memory 150Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Starting 22m kube-proxy \n Normal Starting 21m kube-proxy \n Normal Starting 23m kubelet Starting kubelet.\n Warning InvalidDiskCapacity 23m kubelet invalid capacity 0 on image filesystem\n Normal NodeHasSufficientMemory 23m (x2 over 23m) kubelet Node k8s-upgrade-and-conformance-8jx80k-jsr69-d69v8 status is now: NodeHasSufficientMemory\n Normal NodeHasSufficientPID 23m (x2 over 23m) kubelet Node k8s-upgrade-and-conformance-8jx80k-jsr69-d69v8 status is now: NodeHasSufficientPID\n Warning CheckLimitsForResolvConf 23m kubelet Resolv.conf file '/etc/resolv.conf' contains search line consisting of more than 3 domains!\n Normal NodeAllocatableEnforced 23m kubelet Updated Node Allocatable limit across pods\n Normal NodeHasNoDiskPressure 23m (x2 over 23m) kubelet Node k8s-upgrade-and-conformance-8jx80k-jsr69-d69v8 status is now: NodeHasNoDiskPressure\n Normal NodeReady 21m kubelet Node k8s-upgrade-and-conformance-8jx80k-jsr69-d69v8 status is now: NodeReady\n" Jan 11 15:23:47.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7742 describe namespace kubectl-7742' Jan 11 15:23:48.126: INFO: stderr: "" Jan 11 15:23:48.126: INFO: stdout: "Name: kubectl-7742\nLabels: e2e-framework=kubectl\n e2e-run=6b82e259-7219-4486-8e25-52bff6c83755\n kubernetes.io/metadata.name=kubectl-7742\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:23:48.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-7742" for this suite. 
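The `kubectl describe node` output above aggregates node labels, conditions, capacity, and events. As a hedged illustration (not part of the conformance run), the condition table alone can be read straight from the API with client-go; the node name below is copied from the log and is otherwise arbitrary:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Fetch the control-plane node named in the describe output and print the
	// same condition columns (Type / Status / Reason) that kubectl renders.
	node, err := cs.CoreV1().Nodes().Get(context.TODO(),
		"k8s-upgrade-and-conformance-8jx80k-jsr69-d69v8", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-18s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}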
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":69,"skipped":1464,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m {"msg":"FAILED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":89,"failed":2,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]} [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:19:03.022: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-lifecycle-hook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 �[1mSTEP�[0m: create the container to handle the HTTPGet hook request. 
Jan 11 15:19:03.072: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Jan 11 15:19:05.078: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the pod with lifecycle hook
Jan 11 15:19:05.093: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true)
  (the same Pending message was logged roughly every 2s up to Jan 11 15:21:17.098)
Jan 11 15:21:19.099: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false)
  (the same Running/not-Ready message was logged roughly every 2s up to Jan 11 15:24:05.106)
Jan 11 15:24:05.107: FAIL: Unexpected error:
    <*errors.errorString | 0xc0002bc2b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*PodClient).CreateSync(0xc00397fce0, 0x7fdc8b3085b8)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:107 +0x94
k8s.io/kubernetes/test/e2e/common/node.glob..func12.1.2(0xc001030c00)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:72 +0x73
k8s.io/kubernetes/test/e2e/common/node.glob..func12.1.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:105 +0x32b
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000186d00, 0x735e880)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 11 15:24:05.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7299" for this suite.
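The failure above comes from the framework's PodClient.CreateSync giving up after roughly five minutes because pod-with-poststart-exec-hook never reported Ready. A simplified sketch of that kind of Running-and-Ready poll (an assumption-laden illustration, not the framework's actual implementation) is:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s, for up to 5m, until the pod is Running and its Ready
	// condition is True; on timeout this returns the same
	// "timed out waiting for the condition" error seen in the failure above.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("container-lifecycle-hook-7299").Get(
			context.TODO(), "pod-with-poststart-exec-hook", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pod.Status.Phase != corev1.PodRunning {
			return false, nil
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	fmt.Println("wait result:", err)
}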
• Failure [302.107 seconds]
[sig-node] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44
    should execute poststart exec hook properly [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

    Jan 11 15:24:05.107: Unexpected error:
        <*errors.errorString | 0xc0002bc2b0>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:107
------------------------------
{"msg":"FAILED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":89,"failed":3,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 11 15:22:52.166: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name cm-test-opt-del-fe83a36c-3627-46ca-b49d-79ec9d9bf292
STEP: Creating configMap with name cm-test-opt-upd-cf722dde-74f4-406a-9050-0fc9d9b32502
STEP: Creating the pod
Jan 11 15:22:52.234: INFO: The status of Pod pod-configmaps-fb6939d7-2ad6-48ae-9c5b-5cc4bbe95280 is Pending, waiting for it to be Running (with Ready = true)
Jan 11 15:22:54.242: INFO: The status of Pod pod-configmaps-fb6939d7-2ad6-48ae-9c5b-5cc4bbe95280 is Pending, waiting for it to be Running (with Ready = true)
Jan 11 15:22:56.247: INFO: The status of Pod pod-configmaps-fb6939d7-2ad6-48ae-9c5b-5cc4bbe95280 is Running (Ready = true)
STEP: Deleting configmap cm-test-opt-del-fe83a36c-3627-46ca-b49d-79ec9d9bf292
STEP: Updating configmap cm-test-opt-upd-cf722dde-74f4-406a-9050-0fc9d9b32502
STEP: Creating configMap with name cm-test-opt-create-b86ef584-90e4-4b08-a277-fcc9bb127f99
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:24:06.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-8636" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":62,"skipped":1421,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:24:05.195: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating configMap with name configmap-test-volume-dec80e0c-4d83-4ae6-b654-e92c34b6e570 �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 11 15:24:05.260: INFO: Waiting up to 5m0s for pod "pod-configmaps-e1c68574-4516-4e7a-9939-709672edb574" in namespace "configmap-888" to be "Succeeded or Failed" Jan 11 15:24:05.266: INFO: Pod "pod-configmaps-e1c68574-4516-4e7a-9939-709672edb574": Phase="Pending", Reason="", readiness=false. Elapsed: 6.339686ms Jan 11 15:24:07.274: INFO: Pod "pod-configmaps-e1c68574-4516-4e7a-9939-709672edb574": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013995682s Jan 11 15:24:09.281: INFO: Pod "pod-configmaps-e1c68574-4516-4e7a-9939-709672edb574": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02104615s �[1mSTEP�[0m: Saw pod success Jan 11 15:24:09.281: INFO: Pod "pod-configmaps-e1c68574-4516-4e7a-9939-709672edb574" satisfied condition "Succeeded or Failed" Jan 11 15:24:09.286: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-worker-b15lfw pod pod-configmaps-e1c68574-4516-4e7a-9939-709672edb574 container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 11 15:24:09.326: INFO: Waiting for pod pod-configmaps-e1c68574-4516-4e7a-9939-709672edb574 to disappear Jan 11 15:24:09.333: INFO: Pod pod-configmaps-e1c68574-4516-4e7a-9939-709672edb574 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:24:09.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-888" for this suite. 
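The ConfigMap volume specs above mount a ConfigMap with a defaultMode and expect the projected file permissions to match. A minimal sketch of building such a pod with client-go types follows; the pod name, image, ConfigMap name, namespace, and mode are illustrative assumptions, not values from the test:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	mode := int32(0400) // files projected from the ConfigMap get mode 0400
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "cm-defaultmode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "checker",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/cm"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/cm"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cm",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"},
						DefaultMode:          &mode,
					},
				},
			}},
		},
	}
	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.Name)
}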
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":107,"failed":3,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:24:09.468: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename pods �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 [It] should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating a Pod with a static label �[1mSTEP�[0m: watching for Pod to be ready Jan 11 15:24:09.571: INFO: observed Pod pod-test in namespace pods-8800 in phase Pending with labels: map[test-pod-static:true] & conditions [] Jan 11 15:24:09.579: INFO: observed Pod pod-test in namespace pods-8800 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:24:09 +0000 UTC }] Jan 11 15:24:09.600: INFO: observed Pod pod-test in namespace pods-8800 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:24:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:24:09 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:24:09 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:24:09 +0000 UTC }] Jan 11 15:24:11.493: INFO: Found Pod pod-test in namespace pods-8800 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:24:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:24:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:24:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 15:24:09 +0000 UTC }] �[1mSTEP�[0m: patching the Pod with a new Label and updated data Jan 11 15:24:11.518: INFO: observed event type ADDED �[1mSTEP�[0m: getting the Pod and ensuring that it's 
patched �[1mSTEP�[0m: replacing the Pod's status Ready condition to False �[1mSTEP�[0m: check the Pod again to ensure its Ready conditions are False �[1mSTEP�[0m: deleting the Pod via a Collection with a LabelSelector �[1mSTEP�[0m: watching for the Pod to be deleted Jan 11 15:24:11.562: INFO: observed event type ADDED Jan 11 15:24:11.562: INFO: observed event type MODIFIED Jan 11 15:24:11.562: INFO: observed event type MODIFIED Jan 11 15:24:11.562: INFO: observed event type MODIFIED Jan 11 15:24:11.562: INFO: observed event type MODIFIED Jan 11 15:24:11.562: INFO: observed event type MODIFIED Jan 11 15:24:11.562: INFO: observed event type MODIFIED Jan 11 15:24:13.514: INFO: observed event type MODIFIED Jan 11 15:24:14.511: INFO: observed event type MODIFIED Jan 11 15:24:14.524: INFO: observed event type MODIFIED [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:24:14.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pods-8800" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":6,"skipped":140,"failed":3,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:23:48.250: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename crd-publish-openapi �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 11 15:23:48.293: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: client-side validation (kubectl create and apply) allows request with known and required properties Jan 11 15:23:51.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7218 --namespace=crd-publish-openapi-7218 create -f -' Jan 11 
15:23:54.133: INFO: stderr: "" Jan 11 15:23:54.133: INFO: stdout: "e2e-test-crd-publish-openapi-1863-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jan 11 15:23:54.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7218 --namespace=crd-publish-openapi-7218 delete e2e-test-crd-publish-openapi-1863-crds test-foo' Jan 11 15:23:54.299: INFO: stderr: "" Jan 11 15:23:54.300: INFO: stdout: "e2e-test-crd-publish-openapi-1863-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Jan 11 15:23:54.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7218 --namespace=crd-publish-openapi-7218 apply -f -' Jan 11 15:23:54.828: INFO: stderr: "" Jan 11 15:23:54.828: INFO: stdout: "e2e-test-crd-publish-openapi-1863-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jan 11 15:23:54.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7218 --namespace=crd-publish-openapi-7218 delete e2e-test-crd-publish-openapi-1863-crds test-foo' Jan 11 15:23:54.985: INFO: stderr: "" Jan 11 15:23:54.985: INFO: stdout: "e2e-test-crd-publish-openapi-1863-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" �[1mSTEP�[0m: client-side validation (kubectl create and apply) rejects request with value outside defined enum values Jan 11 15:23:54.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7218 --namespace=crd-publish-openapi-7218 create -f -' Jan 11 15:23:55.458: INFO: rc: 1 �[1mSTEP�[0m: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Jan 11 15:23:55.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7218 --namespace=crd-publish-openapi-7218 create -f -' Jan 11 15:23:55.933: INFO: rc: 1 Jan 11 15:23:55.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7218 --namespace=crd-publish-openapi-7218 apply -f -' Jan 11 15:23:56.395: INFO: rc: 1 �[1mSTEP�[0m: client-side validation (kubectl create and apply) rejects request without required properties Jan 11 15:23:56.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7218 --namespace=crd-publish-openapi-7218 create -f -' Jan 11 15:23:56.835: INFO: rc: 1 Jan 11 15:23:56.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7218 --namespace=crd-publish-openapi-7218 apply -f -' Jan 11 15:23:57.295: INFO: rc: 1 �[1mSTEP�[0m: kubectl explain works to explain CR properties Jan 11 15:23:57.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7218 explain e2e-test-crd-publish-openapi-1863-crds' Jan 11 15:23:57.880: INFO: stderr: "" Jan 11 15:23:57.880: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-1863-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" �[1mSTEP�[0m: kubectl explain works to explain CR properties recursively Jan 11 15:23:57.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7218 explain e2e-test-crd-publish-openapi-1863-crds.metadata' Jan 11 15:23:58.338: INFO: stderr: "" Jan 11 15:23:58.338: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-1863-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. 
After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. 
Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jan 11 15:23:58.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7218 explain e2e-test-crd-publish-openapi-1863-crds.spec' Jan 11 15:23:58.784: INFO: stderr: "" Jan 11 15:23:58.785: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-1863-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jan 11 15:23:58.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7218 explain e2e-test-crd-publish-openapi-1863-crds.spec.bars' Jan 11 15:24:14.949: INFO: stderr: "" Jan 11 15:24:14.950: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-1863-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n feeling\t<string>\n Whether Bar is feeling great.\n\n name\t<string> -required-\n Name of Bar.\n\n" �[1mSTEP�[0m: kubectl explain works to return error when explain is called on property that doesn't exist Jan 11 15:24:14.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7218 explain e2e-test-crd-publish-openapi-1863-crds.spec.bars2' Jan 11 15:24:15.396: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:24:20.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "crd-publish-openapi-7218" for this suite. 
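The kubectl explain calls above walk the OpenAPI schema that the API server publishes for a freshly created CRD, drilling from the top-level kind down into nested properties and expecting a non-zero exit code for a property that does not exist. The same check can be repeated by hand against any CRD in the cluster; the plural resource name "foos" below is illustrative, the command pattern mirrors the ones in the log:

  # Top-level schema of a custom resource kind
  kubectl --kubeconfig=/tmp/kubeconfig explain foos
  # Drill into nested properties the same way the spec does
  kubectl --kubeconfig=/tmp/kubeconfig explain foos.spec
  kubectl --kubeconfig=/tmp/kubeconfig explain foos.spec.bars
  # A non-existent property should fail (the "rc: 1" lines above)
  kubectl --kubeconfig=/tmp/kubeconfig explain foos.spec.doesNotExist || echo "explain failed as expected"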
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":70,"skipped":1499,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:24:14.816: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename security-context �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Jan 11 15:24:14.878: INFO: Waiting up to 5m0s for pod "security-context-b4133d1f-899c-41f5-8e05-c4b9300b9243" in namespace "security-context-7104" to be "Succeeded or Failed" Jan 11 15:24:14.896: INFO: Pod "security-context-b4133d1f-899c-41f5-8e05-c4b9300b9243": Phase="Pending", Reason="", readiness=false. Elapsed: 17.056029ms Jan 11 15:24:16.913: INFO: Pod "security-context-b4133d1f-899c-41f5-8e05-c4b9300b9243": Phase="Running", Reason="", readiness=false. Elapsed: 2.033976641s Jan 11 15:24:18.925: INFO: Pod "security-context-b4133d1f-899c-41f5-8e05-c4b9300b9243": Phase="Running", Reason="", readiness=false. Elapsed: 4.046716868s Jan 11 15:24:20.932: INFO: Pod "security-context-b4133d1f-899c-41f5-8e05-c4b9300b9243": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.053262243s �[1mSTEP�[0m: Saw pod success Jan 11 15:24:20.932: INFO: Pod "security-context-b4133d1f-899c-41f5-8e05-c4b9300b9243" satisfied condition "Succeeded or Failed" Jan 11 15:24:20.938: INFO: Trying to get logs from node k8s-upgrade-and-conformance-8jx80k-worker-b15lfw pod security-context-b4133d1f-899c-41f5-8e05-c4b9300b9243 container test-container: <nil> �[1mSTEP�[0m: delete the pod Jan 11 15:24:20.964: INFO: Waiting for pod security-context-b4133d1f-899c-41f5-8e05-c4b9300b9243 to disappear Jan 11 15:24:20.970: INFO: Pod security-context-b4133d1f-899c-41f5-8e05-c4b9300b9243 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:24:20.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "security-context-7104" for this suite. 
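The Security Context spec above runs a pod whose pod-level securityContext sets RunAsUser and RunAsGroup, then checks the container actually ran under that UID/GID. A minimal hand-run equivalent — the pod name, image, and UID/GID values are illustrative, not the ones used by the test:

  kubectl --kubeconfig=/tmp/kubeconfig create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: security-context-demo     # illustrative name
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001               # pod-level UID applied to the container
      runAsGroup: 2002              # pod-level GID applied to the container
    containers:
    - name: test-container
      image: busybox                # illustrative; any image with `id` works
      command: ["sh", "-c", "id"]
  EOF
  # After the pod Succeeds, the log should report uid=1001 ... gid=2002
  kubectl --kubeconfig=/tmp/kubeconfig logs security-context-demo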
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":7,"skipped":207,"failed":3,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:24:20.651: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 11 15:24:20.716: INFO: The status of Pod busybox-readonly-fs47d946db-8267-41a9-aa3b-70443b72ff90 is Pending, waiting for it to be Running (with Ready = true) Jan 11 15:24:22.723: INFO: The status of Pod busybox-readonly-fs47d946db-8267-41a9-aa3b-70443b72ff90 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:24:22.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-519" for this suite.
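The Kubelet spec above verifies that a container started with a read-only root filesystem cannot write to it. A minimal sketch to reproduce the same condition by hand — the pod name and image are illustrative, not the ones created by the test:

  kubectl --kubeconfig=/tmp/kubeconfig create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox-readonly-fs-demo   # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: busybox
      image: busybox                 # illustrative image
      command: ["sh", "-c", "touch /should-fail; echo exit=$?"]
      securityContext:
        readOnlyRootFilesystem: true # writes to / must fail
  EOF
  # Expect the touch to fail with "Read-only file system"
  kubectl --kubeconfig=/tmp/kubeconfig logs busybox-readonly-fs-demo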
• ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":71,"skipped":1508,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 11 15:24:22.776: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:186 [It] should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Jan 11 15:24:22.858: INFO: starting watch STEP: patching STEP: updating Jan 11 15:24:22.877: INFO: waiting for watch events with expected annotations Jan 11 15:24:22.877: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:24:22.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-2537" for this suite.
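The IngressClass spec above only exercises API round-trips (create, get, list, watch, patch, update, delete, deleteCollection) against networking.k8s.io/v1 IngressClass objects; no ingress traffic is involved. The same round-trip can be driven by hand — the object name and controller string below are illustrative:

  kubectl --kubeconfig=/tmp/kubeconfig create -f - <<'EOF'
  apiVersion: networking.k8s.io/v1
  kind: IngressClass
  metadata:
    name: demo-class                            # illustrative name
  spec:
    controller: example.com/ingress-controller  # illustrative controller string
  EOF
  kubectl --kubeconfig=/tmp/kubeconfig get ingressclass demo-class -o yaml
  kubectl --kubeconfig=/tmp/kubeconfig patch ingressclass demo-class --type=merge -p '{"metadata":{"annotations":{"patched":"true"}}}'
  kubectl --kubeconfig=/tmp/kubeconfig delete ingressclass demo-class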
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":72,"skipped":1512,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:24:06.899: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename deployment �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 11 15:24:06.964: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jan 11 15:24:11.974: INFO: Pod name rollover-pod: Found 1 pods out of 1 �[1mSTEP�[0m: ensuring each pod is running Jan 11 15:24:11.974: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jan 11 15:24:13.981: INFO: Creating deployment "test-rollover-deployment" Jan 11 15:24:13.992: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jan 11 15:24:16.002: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jan 11 15:24:16.014: INFO: Ensure that both replica sets have 1 created replica Jan 11 15:24:16.023: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jan 11 15:24:16.038: INFO: Updating deployment test-rollover-deployment Jan 11 15:24:16.039: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jan 11 15:24:18.052: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jan 11 15:24:18.065: INFO: Make sure deployment "test-rollover-deployment" is complete Jan 11 15:24:18.079: INFO: all replica sets need to contain the pod-template-hash label Jan 11 15:24:18.079: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.January, 11, 15, 24, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 11, 15, 24, 14, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 11, 15, 24, 17, 0, time.Local), 
LastTransitionTime:time.Date(2023, time.January, 11, 15, 24, 14, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-77db6f9f48\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 15:24:20.091: INFO: all replica sets need to contain the pod-template-hash label Jan 11 15:24:20.091: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.January, 11, 15, 24, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 11, 15, 24, 14, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 11, 15, 24, 17, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 11, 15, 24, 14, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-77db6f9f48\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 15:24:22.097: INFO: all replica sets need to contain the pod-template-hash label Jan 11 15:24:22.097: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.January, 11, 15, 24, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 11, 15, 24, 14, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 11, 15, 24, 17, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 11, 15, 24, 14, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-77db6f9f48\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 15:24:24.125: INFO: all replica sets need to contain the pod-template-hash label Jan 11 15:24:24.125: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.January, 11, 15, 24, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 11, 15, 24, 14, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 11, 15, 24, 17, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 11, 15, 24, 14, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-77db6f9f48\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 15:24:26.093: INFO: all replica sets need to contain the pod-template-hash label Jan 11 15:24:26.093: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.January, 11, 15, 24, 14, 0, time.Local), 
LastTransitionTime:time.Date(2023, time.January, 11, 15, 24, 14, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 11, 15, 24, 17, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 11, 15, 24, 14, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-77db6f9f48\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 11 15:24:28.108: INFO: Jan 11 15:24:28.109: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Jan 11 15:24:28.219: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-2 e3e22c5f-4404-4548-b706-bb5da389e0a6 13699 2 2023-01-11 15:24:13 +0000 UTC <nil> <nil> map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-01-11 15:24:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-11 15:24:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0050841f8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> 
nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-01-11 15:24:14 +0000 UTC,LastTransitionTime:2023-01-11 15:24:14 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-77db6f9f48" has successfully progressed.,LastUpdateTime:2023-01-11 15:24:27 +0000 UTC,LastTransitionTime:2023-01-11 15:24:14 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 11 15:24:28.268: INFO: New ReplicaSet "test-rollover-deployment-77db6f9f48" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-77db6f9f48 deployment-2 47ad6e2f-e620-4b1d-8e15-c73e102a474f 13688 2 2023-01-11 15:24:16 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:77db6f9f48] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment e3e22c5f-4404-4548-b706-bb5da389e0a6 0xc004407fb7 0xc004407fb8}] [] [{kube-controller-manager Update apps/v1 2023-01-11 15:24:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3e22c5f-4404-4548-b706-bb5da389e0a6\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-11 15:24:27 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 77db6f9f48,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:77db6f9f48] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00449c1a8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil 
default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 11 15:24:28.268: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 11 15:24:28.268: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-2 ec7e3daa-9995-4e8e-ba73-577384a6ade4 13698 2 2023-01-11 15:24:06 +0000 UTC <nil> <nil> map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment e3e22c5f-4404-4548-b706-bb5da389e0a6 0xc004407e87 0xc004407e88}] [] [{e2e.test Update apps/v1 2023-01-11 15:24:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-11 15:24:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3e22c5f-4404-4548-b706-bb5da389e0a6\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-01-11 15:24:27 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004407f48 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 11 15:24:28.268: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-784bc44b77 deployment-2 eb0f7e09-58f3-4072-b9f6-e2f9e24988f9 13390 2 2023-01-11 15:24:14 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:784bc44b77] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment e3e22c5f-4404-4548-b706-bb5da389e0a6 0xc00449c297 0xc00449c298}] [] [{kube-controller-manager Update apps/v1 2023-01-11 15:24:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3e22c5f-4404-4548-b706-bb5da389e0a6\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-11 15:24:16 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 784bc44b77,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:784bc44b77] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00449c7c8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 11 15:24:28.296: INFO: Pod "test-rollover-deployment-77db6f9f48-xkcq4" is available: &Pod{ObjectMeta:{test-rollover-deployment-77db6f9f48-xkcq4 test-rollover-deployment-77db6f9f48- deployment-2 7960b207-55b3-4de7-a11c-31b888a376f3 13405 0 2023-01-11 15:24:16 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:77db6f9f48] map[] [{apps/v1 ReplicaSet test-rollover-deployment-77db6f9f48 47ad6e2f-e620-4b1d-8e15-c73e102a474f 0xc005084587 0xc005084588}] [] [{kube-controller-manager Update v1 2023-01-11 15:24:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"47ad6e2f-e620-4b1d-8e15-c73e102a474f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-11 15:24:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.71\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dpjvf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dpjvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-8jx80k-worker-r73y4c,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleratio
n{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:24:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:24:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:24:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:24:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.71,StartTime:2023-01-11 15:24:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-11 15:24:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e,ContainerID:containerd://bd9b1e7ee65e88987bc647b8788d8ffa1070251b4a46af4c2979ad34498c266d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.71,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 11 15:24:28.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "deployment-2" for this suite. 
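The rollover spec above creates a deployment with minReadySeconds=10 and a RollingUpdate strategy of maxUnavailable=0 / maxSurge=1 (visible in the dump), updates the pod template image, and then waits for the old ReplicaSets to drain to zero replicas. A minimal hand-run sketch of the same rollover — the deployment name and the "new" image tag are illustrative, not taken from this run:

  kubectl --kubeconfig=/tmp/kubeconfig create -f - <<'EOF'
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: rollover-demo                  # illustrative name
  spec:
    replicas: 1
    minReadySeconds: 10
    selector:
      matchLabels:
        name: rollover-pod
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 0                # never drop below desired availability
        maxSurge: 1
    template:
      metadata:
        labels:
          name: rollover-pod
      spec:
        containers:
        - name: agnhost
          image: k8s.gcr.io/e2e-test-images/agnhost:2.39
  EOF
  # Trigger the rollover by updating the container image (new tag is illustrative),
  # then watch the old ReplicaSet scale to 0 while the new one becomes available.
  kubectl --kubeconfig=/tmp/kubeconfig set image deployment/rollover-demo agnhost=example.registry/agnhost:demo-tag
  kubectl --kubeconfig=/tmp/kubeconfig rollout status deployment/rollover-demo
  kubectl --kubeconfig=/tmp/kubeconfig get rs -l name=rollover-pod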
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":63,"skipped":1430,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 15:24:22.985: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename deployment �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 11 15:24:23.024: INFO: Creating deployment "webserver-deployment" Jan 11 15:24:23.035: INFO: Waiting for observed generation 1 Jan 11 15:24:25.052: INFO: Waiting for all required pods to come up Jan 11 15:24:25.070: INFO: Pod name httpd: Found 10 pods out of 10 �[1mSTEP�[0m: ensuring each pod is running Jan 11 15:24:27.098: INFO: Waiting for deployment "webserver-deployment" to complete Jan 11 15:24:27.107: INFO: Updating deployment "webserver-deployment" with a non-existent image Jan 11 15:24:27.124: INFO: Updating deployment webserver-deployment Jan 11 15:24:27.124: INFO: Waiting for observed generation 2 Jan 11 15:24:29.140: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jan 11 15:24:29.155: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jan 11 15:24:29.161: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jan 11 15:24:29.184: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jan 11 15:24:29.185: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jan 11 15:24:29.190: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jan 11 15:24:29.210: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Jan 11 15:24:29.210: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Jan 11 15:24:29.229: INFO: Updating deployment webserver-deployment Jan 11 15:24:29.230: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Jan 11 15:24:29.247: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jan 11 15:24:29.259: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Jan 11 15:24:29.295: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-8885 16fc2fe3-c10e-45dc-b879-ec6b2b6bb2f9 13773 3 2023-01-11 15:24:23 +0000 UTC <nil> <nil> map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-01-11 15:24:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-11 15:24:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0055cf4e8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-566f96c878" is progressing.,LastUpdateTime:2023-01-11 15:24:27 +0000 UTC,LastTransitionTime:2023-01-11 15:24:23 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-01-11 15:24:29 +0000 UTC,LastTransitionTime:2023-01-11 15:24:29 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Jan 11 
15:24:29.343: INFO: New ReplicaSet "webserver-deployment-566f96c878" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-566f96c878 deployment-8885 78645042-2f54-4feb-ae04-9afb3b667b6f 13765 3 2023-01-11 15:24:27 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 16fc2fe3-c10e-45dc-b879-ec6b2b6bb2f9 0xc0055fc597 0xc0055fc598}] [] [{kube-controller-manager Update apps/v1 2023-01-11 15:24:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"16fc2fe3-c10e-45dc-b879-ec6b2b6bb2f9\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-11 15:24:27 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 566f96c878,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0055fc648 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 11 15:24:29.343: INFO: All old ReplicaSets of Deployment "webserver-deployment": Jan 11 15:24:29.343: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-5d9fdcc779 deployment-8885 767edf23-7b8f-4f7a-b9be-498c0374ad20 13762 3 2023-01-11 15:24:23 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 16fc2fe3-c10e-45dc-b879-ec6b2b6bb2f9 0xc0055fc6a7 0xc0055fc6a8}] [] [{kube-controller-manager Update apps/v1 2023-01-11 15:24:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"16fc2fe3-c10e-45dc-b879-ec6b2b6bb2f9\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-11 15:24:24 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 5d9fdcc779,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0055fc738 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Jan 11 15:24:29.388: INFO: Pod "webserver-deployment-566f96c878-dgwlh" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-dgwlh webserver-deployment-566f96c878- deployment-8885 42e6459a-384c-4ed7-878e-b3b84a2bb234 13796 0 2023-01-11 15:24:29 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 78645042-2f54-4feb-ae04-9afb3b667b6f 0xc0055fcc87 0xc0055fcc88}] [] [{kube-controller-manager Update v1 2023-01-11 15:24:29 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"78645042-2f54-4feb-ae04-9afb3b667b6f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9r9q8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9r9q8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]
ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 15:24:29.388: INFO: Pod "webserver-deployment-566f96c878-hp28d" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-hp28d webserver-deployment-566f96c878- deployment-8885 19464a46-332c-4db7-9c22-b76bbf14f8c5 13794 0 2023-01-11 15:24:29 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 78645042-2f54-4feb-ae04-9afb3b667b6f 0xc0055fcde7 0xc0055fcde8}] [] [{kube-controller-manager Update v1 2023-01-11 15:24:29 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"78645042-2f54-4feb-ae04-9afb3b667b6f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rzkr9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rzkr9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountNam
e:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-8jx80k-md-0-chlxb-c9b4c6cfd-zv6w2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:24:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 15:24:29.389: INFO: Pod "webserver-deployment-566f96c878-jn7kn" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-jn7kn webserver-deployment-566f96c878- deployment-8885 355e46e4-8b4d-40ed-b85d-4cdfa406f4ce 13760 0 2023-01-11 15:24:27 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 78645042-2f54-4feb-ae04-9afb3b667b6f 0xc0055fcfe0 0xc0055fcfe1}] [] [{kube-controller-manager Update v1 2023-01-11 15:24:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"78645042-2f54-4feb-ae04-9afb3b667b6f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-11 15:24:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.87\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gf79w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gf79w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-8jx80k-worker-b15lfw,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Typ
e:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:24:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:24:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:24:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:24:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.2.87,StartTime:2023-01-11 15:24:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.87,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 15:24:29.389: INFO: Pod "webserver-deployment-566f96c878-jpk5m" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-jpk5m webserver-deployment-566f96c878- deployment-8885 7e5d3579-cb35-4280-9cba-7ae353dab95c 13799 0 2023-01-11 15:24:29 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 78645042-2f54-4feb-ae04-9afb3b667b6f 0xc0055fd240 0xc0055fd241}] [] [{kube-controller-manager Update v1 2023-01-11 15:24:29 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"78645042-2f54-4feb-ae04-9afb3b667b6f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kn9mh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kn9mh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-8jx80k-md-0-chlxb-c9b4c6cfd-57pm4,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondit
ion{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:24:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 15:24:29.389: INFO: Pod "webserver-deployment-566f96c878-jsgbb" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-jsgbb webserver-deployment-566f96c878- deployment-8885 d3b4b8f5-8f5e-4736-b380-a71711e30dfb 13747 0 2023-01-11 15:24:27 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 78645042-2f54-4feb-ae04-9afb3b667b6f 0xc0055fd3c0 0xc0055fd3c1}] [] [{kube-controller-manager Update v1 2023-01-11 15:24:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"78645042-2f54-4feb-ae04-9afb3b667b6f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-11 15:24:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.75\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jpv29,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jpv29,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-8jx80k-worker-r73y4c,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Typ
e:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:24:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:24:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:24:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:24:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.75,StartTime:2023-01-11 15:24:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.75,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 15:24:29.390: INFO: Pod "webserver-deployment-566f96c878-n6z2n" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-n6z2n webserver-deployment-566f96c878- deployment-8885 b2031f56-030d-46e1-b4da-379312acd658 13742 0 2023-01-11 15:24:27 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 78645042-2f54-4feb-ae04-9afb3b667b6f 0xc0055fd5d0 0xc0055fd5d1}] [] [{kube-controller-manager Update v1 2023-01-11 15:24:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"78645042-2f54-4feb-ae04-9afb3b667b6f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-11 15:24:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.96\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mltgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mltgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-8jx80k-md-0-chlxb-c9b4c6cfd-57pm4,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{Pod
Condition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:24:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:24:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:24:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:24:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.1.96,StartTime:2023-01-11 15:24:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.96,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 15:24:29.393: INFO: Pod "webserver-deployment-566f96c878-pvgk2" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-pvgk2 webserver-deployment-566f96c878- deployment-8885 3901e8da-514e-4dcc-990f-f9695d0bd10e 13783 0 2023-01-11 15:24:29 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 78645042-2f54-4feb-ae04-9afb3b667b6f 0xc0055fd890 0xc0055fd891}] [] [{kube-controller-manager Update v1 2023-01-11 15:24:29 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"78645042-2f54-4feb-ae04-9afb3b667b6f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-flghh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-flghh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-8jx80k-worker-b15lfw,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodS
cheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:24:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 15:24:29.394: INFO: Pod "webserver-deployment-566f96c878-rrdlw" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-rrdlw webserver-deployment-566f96c878- deployment-8885 cd8217a3-369d-43ef-b7e0-03da117a5c8d 13751 0 2023-01-11 15:24:27 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 78645042-2f54-4feb-ae04-9afb3b667b6f 0xc0055fda30 0xc0055fda31}] [] [{kube-controller-manager Update v1 2023-01-11 15:24:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"78645042-2f54-4feb-ae04-9afb3b667b6f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-11 15:24:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.76\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bz7t6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bz7t6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-8jx80k-worker-r73y4c,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Typ
e:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:24:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:24:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:24:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:24:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.76,StartTime:2023-01-11 15:24:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.76,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 15:24:29.394: INFO: Pod "webserver-deployment-566f96c878-t5qgq" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-t5qgq webserver-deployment-566f96c878- deployment-8885 7f96843a-0241-4bb2-8a9e-b4fc8a973549 13797 0 2023-01-11 15:24:29 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 78645042-2f54-4feb-ae04-9afb3b667b6f 0xc0055fdd30 0xc0055fdd31}] [] [{kube-controller-manager Update v1 2023-01-11 15:24:29 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"78645042-2f54-4feb-ae04-9afb3b667b6f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gx78l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gx78l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]
ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 15:24:29.395: INFO: Pod "webserver-deployment-566f96c878-tmqz5" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-tmqz5 webserver-deployment-566f96c878- deployment-8885 049ec85b-9b53-4960-aef9-2b44a404d151 13757 0 2023-01-11 15:24:27 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 78645042-2f54-4feb-ae04-9afb3b667b6f 0xc0055fdef7 0xc0055fdef8}] [] [{kube-controller-manager Update v1 2023-01-11 15:24:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"78645042-2f54-4feb-ae04-9afb3b667b6f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-11 15:24:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.58\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qf6jh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qf6jh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-8jx80k-md-0-chlxb-c9b4c6cfd-zv6w2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{Pod
Condition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:24:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:24:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:24:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 15:24:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.58,StartTime:2023-01-11 15:24:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.58,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 15:24:29.395: INFO: Pod "webserver-deployment-566f96c878-tqw56" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-tqw56 webserver-deployment-566f96c878- deployment-8885 821a1c6f-d276-45db-b34e-42523cafbda1 13798 0 2023-01-11 15:24:29 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 78645042-2f54-4feb-ae04-9afb3b667b6f 0xc005684160 0xc005684161}] [] [{kube-controller-manager Update v1 2023-01-11 15:24:29 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"78645042-2f54-4feb-ae04-9afb3b667b6f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nh7n2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nh7n2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]
ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 11 15:24:29.395: INFO: Pod "webserver-deployment-566f96c878-wf6bm" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-wf6bm webserver-deployment-566f96c878- deployment-8885 acc123d8-f7d8-4f