PR:       lodrem: [pick #1053 to release-1.23] build: build binaries for acr-credential-provider when release
Result:   ABORTED
Tests:    0 failed / 0 succeeded
Started:
Elapsed:  58m39s
Revision: 1a8cd095c1ef9e2363f192aa1b9405109eb92dc8
Refs:     1577
... skipping 67 lines ...
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
/home/prow/go/src/sigs.k8s.io/cloud-provider-azure
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure
Image Tag is 8cb653a
Error response from daemon: manifest for capzci.azurecr.io/azure-cloud-controller-manager:8cb653a not found: manifest unknown: manifest tagged by "8cb653a" is not found
Build Linux Azure amd64 cloud controller manager
make: Entering directory '/home/prow/go/src/sigs.k8s.io/cloud-provider-azure'
make ARCH=amd64 build-ccm-image
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cloud-provider-azure'
docker buildx inspect img-builder > /dev/null || docker buildx create --name img-builder --use
error: no builder "img-builder" found
img-builder
# enable qemu for arm64 build
# https://github.com/docker/buildx/issues/464#issuecomment-741507760
docker run --privileged --rm tonistiigi/binfmt --uninstall qemu-aarch64
Unable to find image 'tonistiigi/binfmt:latest' locally
latest: Pulling from tonistiigi/binfmt
... skipping 1341 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! kubectl get secrets | grep capz-xey0tp-kubeconfig; do sleep 1; done"
capz-xey0tp-kubeconfig   cluster.x-k8s.io/secret   1   1s
# Get kubeconfig and store it locally.
kubectl get secrets capz-xey0tp-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! kubectl --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-xey0tp-control-plane-kwmhn   NotReady   control-plane,master   11s   v1.23.5
run "kubectl --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Waiting for 3 control plane machine(s), 2 worker machine(s), and windows machine(s) to become Ready
node/capz-xey0tp-control-plane-kwmhn condition met
node/capz-xey0tp-control-plane-rlf4h condition met
... skipping 48 lines ...
+++ [0428 09:02:10] Building go targets for linux/amd64:
    vendor/github.com/onsi/ginkgo/ginkgo
> non-static build: k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo
make[1]: Leaving directory '/home/prow/go/src/k8s.io/kubernetes'
Conformance test: not doing test setup.
I0428 09:02:13.141415   91946 e2e.go:132] Starting e2e run "fbf89e9c-d38a-4a92-87a5-e12723709633" on Ginkgo node 1
{"msg":"Test Suite starting","total":335,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1651136533 - Will randomize all specs
Will run 335 of 7044 specs

Apr 28 09:02:15.670: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
... skipping 27 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 28 09:02:16.107: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:02:21.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-667" for this suite.
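The kubeconfig retrieval and wait loops above follow the usual Cluster API pattern: the management cluster publishes the workload cluster's kubeconfig base64-encoded under `.data.value` of a `<cluster-name>-kubeconfig` secret, and the script polls until first the secret and then a control-plane node appear. A minimal sketch of that pattern (the function names, timeouts, and argument handling are illustrative, not taken from the job's scripts):

```shell
#!/usr/bin/env bash
set -euo pipefail

decode_kubeconfig() {
  # Cluster API stores the workload kubeconfig base64-encoded
  # under .data.value of the <cluster-name>-kubeconfig secret.
  jq -r .data.value | base64 --decode
}

fetch_workload_kubeconfig() {
  local cluster="$1" out="$2"
  # Wait (up to 5 min) for the kubeconfig secret to be published.
  timeout --foreground 300 bash -c \
    "until kubectl get secret ${cluster}-kubeconfig >/dev/null 2>&1; do sleep 1; done"
  # Decode it into a local file.
  kubectl get secret "${cluster}-kubeconfig" -o json | decode_kubeconfig > "${out}"
  # Wait (up to 10 min) for a control-plane node to register in the new cluster.
  timeout --foreground 600 bash -c \
    "until kubectl --kubeconfig='${out}' get nodes 2>/dev/null | grep -q control-plane; do sleep 1; done"
}
```

The `error: the server doesn't have a resource type "nodes"` line in the log is the expected transient failure of an early poll while the workload API server is still coming up, which is why both checks run inside `timeout`-bounded retry loops.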
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":335,"completed":1,"skipped":32,"failed":0}
SSSSSSSS
------------------------------
[sig-node] Kubelet
  when scheduling a busybox Pod with hostAliases
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Kubelet
... skipping 11 lines ...
Apr 28 09:02:24.265: INFO: The status of Pod busybox-host-aliases9acd388e-dc62-4fa9-ba08-5854c3704ffc is Pending, waiting for it to be Running (with Ready = true)
Apr 28 09:02:26.266: INFO: The status of Pod busybox-host-aliases9acd388e-dc62-4fa9-ba08-5854c3704ffc is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:02:26.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2036" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":2,"skipped":40,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should orphan pods created by rc if delete options say so [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 135 lines ...
Apr 28 09:03:11.493: INFO: Deleting pod "simpletest.rc-zwz72" in namespace "gc-8829"
Apr 28 09:03:11.549: INFO: Deleting pod "simpletest.rc-zzwk5" in namespace "gc-8829"
[AfterEach] [sig-api-machinery] Garbage collector
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:03:11.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8829" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":335,"completed":3,"skipped":57,"failed":0}
------------------------------
[sig-node] Kubelet
  when scheduling a busybox command in a pod
    should print the output to logs [NodeConformance] [Conformance]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Kubelet
... skipping 15 lines ...
Apr 28 09:03:21.832: INFO: The status of Pod busybox-scheduling-92383f4a-7b59-423a-b178-bf54e567328a is Pending, waiting for it to be Running (with Ready = true)
Apr 28 09:03:23.830: INFO: The status of Pod busybox-scheduling-92383f4a-7b59-423a-b178-bf54e567328a is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:03:23.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8806" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":335,"completed":4,"skipped":57,"failed":0}
SSSSSSS
------------------------------
[sig-node] PodTemplates
  should run the lifecycle of PodTemplates [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] PodTemplates
... skipping 6 lines ...
[It] should run the lifecycle of PodTemplates [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-node] PodTemplates
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:03:24.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-8603" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":335,"completed":5,"skipped":64,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Watchers
  should be able to start watching from a specific resource version [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] Watchers
... skipping 14 lines ...
Apr 28 09:03:24.495: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6502 49bdbbc7-df97-4980-a3a8-693f7b2ec2b8 4632 0 2022-04-28 09:03:24 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-04-28 09:03:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 28 09:03:24.496: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6502 49bdbbc7-df97-4980-a3a8-693f7b2ec2b8 4633 0 2022-04-28 09:03:24 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-04-28 09:03:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:03:24.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6502" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":335,"completed":6,"skipped":68,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should adopt matching pods on creation [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] ReplicationController
... skipping 25 lines ...
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:03:50.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3281" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":335,"completed":7,"skipped":100,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath
  Atomic writer volumes
    should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Subpath
... skipping 7 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod pod-subpath-test-configmap-dvxk
STEP: Creating a pod to test atomic-volume-subpath
Apr 28 09:03:51.054: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dvxk" in namespace "subpath-5517" to be "Succeeded or Failed"
Apr 28 09:03:51.088: INFO: Pod "pod-subpath-test-configmap-dvxk": Phase="Pending", Reason="", readiness=false. Elapsed: 33.416399ms
Apr 28 09:03:53.107: INFO: Pod "pod-subpath-test-configmap-dvxk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052970268s
Apr 28 09:03:55.125: INFO: Pod "pod-subpath-test-configmap-dvxk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070414167s
Apr 28 09:03:57.143: INFO: Pod "pod-subpath-test-configmap-dvxk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088286908s
Apr 28 09:03:59.162: INFO: Pod "pod-subpath-test-configmap-dvxk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107434476s
Apr 28 09:04:01.181: INFO: Pod "pod-subpath-test-configmap-dvxk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.127163953s
... skipping 7 lines ...
Apr 28 09:04:17.328: INFO: Pod "pod-subpath-test-configmap-dvxk": Phase="Running", Reason="", readiness=true. Elapsed: 26.273325749s
Apr 28 09:04:19.347: INFO: Pod "pod-subpath-test-configmap-dvxk": Phase="Running", Reason="", readiness=true. Elapsed: 28.292901382s
Apr 28 09:04:21.382: INFO: Pod "pod-subpath-test-configmap-dvxk": Phase="Running", Reason="", readiness=true. Elapsed: 30.327453951s
Apr 28 09:04:23.402: INFO: Pod "pod-subpath-test-configmap-dvxk": Phase="Running", Reason="", readiness=true. Elapsed: 32.347337102s
Apr 28 09:04:25.421: INFO: Pod "pod-subpath-test-configmap-dvxk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.366880908s
STEP: Saw pod success
Apr 28 09:04:25.421: INFO: Pod "pod-subpath-test-configmap-dvxk" satisfied condition "Succeeded or Failed"
Apr 28 09:04:25.437: INFO: Trying to get logs from node capz-xey0tp-md-0-rpjfq pod pod-subpath-test-configmap-dvxk container test-container-subpath-configmap-dvxk: <nil>
STEP: delete the pod
Apr 28 09:04:25.518: INFO: Waiting for pod pod-subpath-test-configmap-dvxk to disappear
Apr 28 09:04:25.535: INFO: Pod pod-subpath-test-configmap-dvxk no longer exists
STEP: Deleting pod pod-subpath-test-configmap-dvxk
Apr 28 09:04:25.535: INFO: Deleting pod "pod-subpath-test-configmap-dvxk" in namespace "subpath-5517"
[AfterEach] [sig-storage] Subpath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:04:25.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5517" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance]","total":335,"completed":8,"skipped":130,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should update annotations on modification [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Downward API volume
... skipping 12 lines ...
Apr 28 09:04:27.781: INFO: The status of Pod annotationupdated2a32014-e129-48d0-8b61-ceedce61b9bb is Running (Ready = true)
Apr 28 09:04:28.368: INFO: Successfully updated pod "annotationupdated2a32014-e129-48d0-8b61-ceedce61b9bb"
[AfterEach] [sig-storage] Downward API volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:04:32.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8275" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":335,"completed":9,"skipped":153,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client
  Kubectl cluster-info
    should check if Kubernetes control plane services is included in cluster-info [Conformance]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-cli] Kubectl client
... skipping 12 lines ...
Apr 28 09:04:33.049: INFO: stderr: ""
Apr 28 09:04:33.049: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://capz-xey0tp-27a9bbd5.northcentralus.cloudapp.azure.com:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:04:33.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7914" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":335,"completed":10,"skipped":172,"failed":0}
SS
------------------------------
[sig-node] Pods
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Pods
... skipping 13 lines ...
Apr 28 09:04:33.286: INFO: The status of Pod pod-exec-websocket-0d386be2-5779-423b-a16d-b6c8778321cd is Pending, waiting for it to be Running (with Ready = true)
Apr 28 09:04:35.304: INFO: The status of Pod pod-exec-websocket-0d386be2-5779-423b-a16d-b6c8778321cd is Running (Ready = true)
[AfterEach] [sig-node] Pods
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:04:35.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-862" for this suite.
•{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":335,"completed":11,"skipped":174,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet
  should serve a basic image on each replica with a public image [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] ReplicaSet
... skipping 12 lines ...
Apr 28 09:04:45.775: INFO: Trying to dial the pod
Apr 28 09:04:50.838: INFO: Controller my-hostname-basic-89b2fbbe-a420-42ed-a126-757faa972ef5: Got expected result from replica 1 [my-hostname-basic-89b2fbbe-a420-42ed-a126-757faa972ef5-gvs5c]: "my-hostname-basic-89b2fbbe-a420-42ed-a126-757faa972ef5-gvs5c", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:04:50.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3856" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":335,"completed":12,"skipped":185,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide host IP as an env var [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Downward API
... skipping 3 lines ...
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward api env vars
Apr 28 09:04:51.032: INFO: Waiting up to 5m0s for pod "downward-api-13f3cbc1-7bcc-49a3-829f-5740b57be551" in namespace "downward-api-8433" to be "Succeeded or Failed"
Apr 28 09:04:51.057: INFO: Pod "downward-api-13f3cbc1-7bcc-49a3-829f-5740b57be551": Phase="Pending", Reason="", readiness=false. Elapsed: 25.519178ms
Apr 28 09:04:53.079: INFO: Pod "downward-api-13f3cbc1-7bcc-49a3-829f-5740b57be551": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046815335s
Apr 28 09:04:55.098: INFO: Pod "downward-api-13f3cbc1-7bcc-49a3-829f-5740b57be551": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065666862s
STEP: Saw pod success
Apr 28 09:04:55.098: INFO: Pod "downward-api-13f3cbc1-7bcc-49a3-829f-5740b57be551" satisfied condition "Succeeded or Failed"
Apr 28 09:04:55.115: INFO: Trying to get logs from node capz-xey0tp-md-0-rpjfq pod downward-api-13f3cbc1-7bcc-49a3-829f-5740b57be551 container dapi-container: <nil>
STEP: delete the pod
Apr 28 09:04:55.175: INFO: Waiting for pod downward-api-13f3cbc1-7bcc-49a3-829f-5740b57be551 to disappear
Apr 28 09:04:55.192: INFO: Pod downward-api-13f3cbc1-7bcc-49a3-829f-5740b57be551 no longer exists
[AfterEach] [sig-node] Downward API
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:04:55.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8433" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":335,"completed":13,"skipped":197,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected configMap
... skipping 12 lines ...
STEP: Updating configmap projected-configmap-test-upd-a511b4fe-2d7f-4790-8661-384ac2c1e1c0
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:04:59.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2293" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":335,"completed":14,"skipped":259,"failed":0}
S
------------------------------
[sig-api-machinery] Servers with support for Table transformation
  should return a 406 for a backend which does not implement metadata [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
... skipping 8 lines ...
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:04:59.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-3580" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":335,"completed":15,"skipped":260,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods
  should support pod readiness gates [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:775
[BeforeEach] [sig-node] Pods
... skipping 12 lines ...
STEP: patching pod status with condition "k8s.io/test-condition2" to true
STEP: patching pod status with condition "k8s.io/test-condition1" to false
[AfterEach] [sig-node] Pods
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:05:14.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1484" for this suite.
•{"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeConformance]","total":335,"completed":16,"skipped":295,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be immutable if `immutable` field is set [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] ConfigMap
... skipping 6 lines ...
[It] should be immutable if `immutable` field is set [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-storage] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:05:14.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1880" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":335,"completed":17,"skipped":328,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client
  Kubectl run pod
    should create a pod from an image when restart is Never [Conformance]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-cli] Kubectl client
... skipping 20 lines ...
Apr 28 09:05:17.421: INFO: stderr: ""
Apr 28 09:05:17.421: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:05:17.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9616" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":335,"completed":18,"skipped":332,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should serve a basic image on each replica with a public image [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] ReplicationController
... skipping 14 lines ...
Apr 28 09:05:19.712: INFO: Trying to dial the pod
Apr 28 09:05:24.767: INFO: Controller my-hostname-basic-0496e4d8-ce1a-4bcc-b804-c70855245a36: Got expected result from replica 1 [my-hostname-basic-0496e4d8-ce1a-4bcc-b804-c70855245a36-b959w]: "my-hostname-basic-0496e4d8-ce1a-4bcc-b804-c70855245a36-b959w", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:05:24.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2171" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":335,"completed":19,"skipped":350,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Apr 28 09:05:24.961: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0772361a-6811-4cb2-9bff-d55f7f3131c6" in namespace "projected-9844" to be "Succeeded or Failed"
Apr 28 09:05:24.979: INFO: Pod "downwardapi-volume-0772361a-6811-4cb2-9bff-d55f7f3131c6": Phase="Pending", Reason="", readiness=false. Elapsed: 17.927245ms
Apr 28 09:05:26.997: INFO: Pod "downwardapi-volume-0772361a-6811-4cb2-9bff-d55f7f3131c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.036336388s
STEP: Saw pod success
Apr 28 09:05:26.997: INFO: Pod "downwardapi-volume-0772361a-6811-4cb2-9bff-d55f7f3131c6" satisfied condition "Succeeded or Failed"
Apr 28 09:05:27.014: INFO: Trying to get logs from node capz-xey0tp-md-0-rpjfq pod downwardapi-volume-0772361a-6811-4cb2-9bff-d55f7f3131c6 container client-container: <nil>
STEP: delete the pod
Apr 28 09:05:27.090: INFO: Waiting for pod downwardapi-volume-0772361a-6811-4cb2-9bff-d55f7f3131c6 to disappear
Apr 28 09:05:27.107: INFO: Pod downwardapi-volume-0772361a-6811-4cb2-9bff-d55f7f3131c6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:05:27.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9844" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":20,"skipped":352,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Probing container
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Probing container
... skipping 13 lines ...
Apr 28 09:05:31.360: INFO: Initial restart count of pod liveness-61e59efe-58c6-41c4-a096-2c7377fd6ee1 is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:09:31.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7412" for this suite.
•{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":335,"completed":21,"skipped":385,"failed":0}
S
------------------------------
[sig-apps] Deployment
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] Deployment
... skipping 26 lines ...
Apr 28 09:09:34.249: INFO: Pod "test-recreate-deployment-5b99bd5487-2h72v" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5b99bd5487-2h72v test-recreate-deployment-5b99bd5487- deployment-6182 50422b36-5ca6-4a0b-9b65-4442c650c202 6302 0 2022-04-28 09:09:34 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:5b99bd5487] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5b99bd5487 f7a585e8-2267-4d15-a002-ea222c5e56f0 0xc00242db07 0xc00242db08}] [] [{Go-http-client Update v1 2022-04-28 09:09:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {kube-controller-manager Update v1 2022-04-28 09:09:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f7a585e8-2267-4d15-a002-ea222c5e56f0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zps5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zps5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,Pr
ocMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-xey0tp-md-0-rpjfq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-28 09:09:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-28 09:09:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2022-04-28 09:09:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-28 09:09:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.4,PodIP:,StartTime:2022-04-28 09:09:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:09:34.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6182" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":335,"completed":22,"skipped":386,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected configMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-map-f68bb61d-5db6-4266-b0b3-65d81668ffe5
STEP: Creating a pod to test consume configMaps
Apr 28 09:09:34.464: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5841c190-cfb2-4881-9473-d6764adf2ce4" in namespace "projected-3836" to be "Succeeded or Failed"
Apr 28 09:09:34.484: INFO: Pod "pod-projected-configmaps-5841c190-cfb2-4881-9473-d6764adf2ce4": Phase="Pending", Reason="", readiness=false. Elapsed: 19.924366ms
Apr 28 09:09:36.501: INFO: Pod "pod-projected-configmaps-5841c190-cfb2-4881-9473-d6764adf2ce4": Phase="Running", Reason="", readiness=true. Elapsed: 2.037367923s
Apr 28 09:09:38.519: INFO: Pod "pod-projected-configmaps-5841c190-cfb2-4881-9473-d6764adf2ce4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054984795s
STEP: Saw pod success
Apr 28 09:09:38.519: INFO: Pod "pod-projected-configmaps-5841c190-cfb2-4881-9473-d6764adf2ce4" satisfied condition "Succeeded or Failed"
Apr 28 09:09:38.535: INFO: Trying to get logs from node capz-xey0tp-md-0-cr9v6 pod pod-projected-configmaps-5841c190-cfb2-4881-9473-d6764adf2ce4 container agnhost-container: <nil>
STEP: delete the pod
Apr 28 09:09:38.618: INFO: Waiting for pod pod-projected-configmaps-5841c190-cfb2-4881-9473-d6764adf2ce4 to disappear
Apr 28 09:09:38.635: INFO: Pod pod-projected-configmaps-5841c190-cfb2-4881-9473-d6764adf2ce4 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:09:38.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3836" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":23,"skipped":417,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] Services
... skipping 51 lines ...
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:09:51.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3520" for this suite.
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":335,"completed":24,"skipped":466,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] PodTemplates
  should delete a collection of pod templates [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] PodTemplates
... skipping 15 lines ...
STEP: check that the list of pod templates matches the requested quantity
Apr 28 09:09:51.587: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:09:51.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-5915" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":335,"completed":25,"skipped":491,"failed":0}
SSSSSS
------------------------------
[sig-node] Security Context
  When creating a container with runAsUser
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Security Context
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 28 09:09:51.802: INFO: Waiting up to 5m0s for pod "busybox-user-65534-44d96167-e60f-4ef2-afda-bdb8dd7ef29a" in namespace "security-context-test-6939" to be "Succeeded or Failed"
Apr 28 09:09:51.820: INFO: Pod "busybox-user-65534-44d96167-e60f-4ef2-afda-bdb8dd7ef29a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.856357ms
Apr 28 09:09:53.841: INFO: Pod "busybox-user-65534-44d96167-e60f-4ef2-afda-bdb8dd7ef29a": Phase="Running", Reason="", readiness=true. Elapsed: 2.038571474s
Apr 28 09:09:55.859: INFO: Pod "busybox-user-65534-44d96167-e60f-4ef2-afda-bdb8dd7ef29a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05692844s
Apr 28 09:09:55.859: INFO: Pod "busybox-user-65534-44d96167-e60f-4ef2-afda-bdb8dd7ef29a" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:09:55.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6939" for this suite.
•{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":26,"skipped":497,"failed":0}
SS
------------------------------
[sig-network] DNS
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] DNS
... skipping 17 lines ...
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:10:22.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8598" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":335,"completed":27,"skipped":499,"failed":0}
SSS
------------------------------
[sig-network] Ingress API
  should support creating Ingress API operations [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] Ingress API
... skipping 26 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Ingress API
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:10:22.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-8141" for this suite.
•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":335,"completed":28,"skipped":502,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods
  should run through the lifecycle of Pods and PodStatus [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Pods
... skipping 34 lines ...
Apr 28 09:10:28.157: INFO: observed event type MODIFIED
Apr 28 09:10:28.185: INFO: observed event type MODIFIED
[AfterEach] [sig-node] Pods
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:10:28.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9065" for this suite.
•{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":335,"completed":29,"skipped":519,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Secrets
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name secret-test-a501df33-76eb-4cc2-b52f-3f9da4897317
STEP: Creating a pod to test consume secrets
Apr 28 09:10:28.443: INFO: Waiting up to 5m0s for pod "pod-secrets-2f116224-fbea-45fa-8f9d-f4e625c68e14" in namespace "secrets-180" to be "Succeeded or Failed"
Apr 28 09:10:28.470: INFO: Pod "pod-secrets-2f116224-fbea-45fa-8f9d-f4e625c68e14": Phase="Pending", Reason="", readiness=false. Elapsed: 26.313623ms
Apr 28 09:10:30.487: INFO: Pod "pod-secrets-2f116224-fbea-45fa-8f9d-f4e625c68e14": Phase="Running", Reason="", readiness=true. Elapsed: 2.043953058s
Apr 28 09:10:32.506: INFO: Pod "pod-secrets-2f116224-fbea-45fa-8f9d-f4e625c68e14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063060369s
STEP: Saw pod success
Apr 28 09:10:32.507: INFO: Pod "pod-secrets-2f116224-fbea-45fa-8f9d-f4e625c68e14" satisfied condition "Succeeded or Failed"
Apr 28 09:10:32.529: INFO: Trying to get logs from node capz-xey0tp-md-0-cr9v6 pod pod-secrets-2f116224-fbea-45fa-8f9d-f4e625c68e14 container secret-volume-test: <nil>
STEP: delete the pod
Apr 28 09:10:32.588: INFO: Waiting for pod pod-secrets-2f116224-fbea-45fa-8f9d-f4e625c68e14 to disappear
Apr 28 09:10:32.605: INFO: Pod pod-secrets-2f116224-fbea-45fa-8f9d-f4e625c68e14 no longer exists
[AfterEach] [sig-storage] Secrets
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:10:32.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-180" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":30,"skipped":559,"failed":0}
SSS
------------------------------
[sig-node] Downward API
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Downward API
... skipping 3 lines ...
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward api env vars
Apr 28 09:10:32.799: INFO: Waiting up to 5m0s for pod "downward-api-5b124152-3d5f-464c-83e0-49ab75ac066c" in namespace "downward-api-8569" to be "Succeeded or Failed"
Apr 28 09:10:32.829: INFO: Pod "downward-api-5b124152-3d5f-464c-83e0-49ab75ac066c": Phase="Pending", Reason="", readiness=false. Elapsed: 29.765289ms
Apr 28 09:10:34.849: INFO: Pod "downward-api-5b124152-3d5f-464c-83e0-49ab75ac066c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.050355797s
STEP: Saw pod success
Apr 28 09:10:34.849: INFO: Pod "downward-api-5b124152-3d5f-464c-83e0-49ab75ac066c" satisfied condition "Succeeded or Failed"
Apr 28 09:10:34.866: INFO: Trying to get logs from node capz-xey0tp-md-0-rpjfq pod downward-api-5b124152-3d5f-464c-83e0-49ab75ac066c container dapi-container: <nil>
STEP: delete the pod
Apr 28 09:10:34.937: INFO: Waiting for pod downward-api-5b124152-3d5f-464c-83e0-49ab75ac066c to disappear
Apr 28 09:10:34.955: INFO: Pod downward-api-5b124152-3d5f-464c-83e0-49ab75ac066c no longer exists
[AfterEach] [sig-node] Downward API
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:10:34.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8569" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":335,"completed":31,"skipped":562,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0777 on tmpfs
Apr 28 09:10:35.135: INFO: Waiting up to 5m0s for pod "pod-ab1cbd2a-e11e-475d-8c1a-43de5f5d8cf2" in namespace "emptydir-5083" to be "Succeeded or Failed"
Apr 28 09:10:35.153: INFO: Pod "pod-ab1cbd2a-e11e-475d-8c1a-43de5f5d8cf2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.60752ms
Apr 28 09:10:37.171: INFO: Pod "pod-ab1cbd2a-e11e-475d-8c1a-43de5f5d8cf2": Phase="Running", Reason="", readiness=true. Elapsed: 2.036193392s
Apr 28 09:10:39.188: INFO: Pod "pod-ab1cbd2a-e11e-475d-8c1a-43de5f5d8cf2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053335645s
STEP: Saw pod success
Apr 28 09:10:39.188: INFO: Pod "pod-ab1cbd2a-e11e-475d-8c1a-43de5f5d8cf2" satisfied condition "Succeeded or Failed"
Apr 28 09:10:39.205: INFO: Trying to get logs from node capz-xey0tp-md-0-cr9v6 pod pod-ab1cbd2a-e11e-475d-8c1a-43de5f5d8cf2 container test-container: <nil>
STEP: delete the pod
Apr 28 09:10:39.272: INFO: Waiting for pod pod-ab1cbd2a-e11e-475d-8c1a-43de5f5d8cf2 to disappear
Apr 28 09:10:39.288: INFO: Pod pod-ab1cbd2a-e11e-475d-8c1a-43de5f5d8cf2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:10:39.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5083" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":32,"skipped":563,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should update labels on modification [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Downward API volume
... skipping 12 lines ...
Apr 28 09:10:41.517: INFO: The status of Pod labelsupdate136193fc-c86d-4ebc-8ff6-3b2d7fe9ddd5 is Running (Ready = true)
Apr 28 09:10:42.104: INFO: Successfully updated pod "labelsupdate136193fc-c86d-4ebc-8ff6-3b2d7fe9ddd5"
[AfterEach] [sig-storage] Downward API volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:10:44.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-141" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":335,"completed":33,"skipped":583,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0644 on node default medium
Apr 28 09:10:44.366: INFO: Waiting up to 5m0s for pod "pod-6ab48324-2b72-4343-b84b-575e12a78ba5" in namespace "emptydir-7508" to be "Succeeded or Failed"
Apr 28 09:10:44.385: INFO: Pod "pod-6ab48324-2b72-4343-b84b-575e12a78ba5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.964222ms
Apr 28 09:10:46.404: INFO: Pod "pod-6ab48324-2b72-4343-b84b-575e12a78ba5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.037265467s
STEP: Saw pod success
Apr 28 09:10:46.404: INFO: Pod "pod-6ab48324-2b72-4343-b84b-575e12a78ba5" satisfied condition "Succeeded or Failed"
Apr 28 09:10:46.420: INFO: Trying to get logs from node capz-xey0tp-md-0-rpjfq pod pod-6ab48324-2b72-4343-b84b-575e12a78ba5 container test-container: <nil>
STEP: delete the pod
Apr 28 09:10:46.482: INFO: Waiting for pod pod-6ab48324-2b72-4343-b84b-575e12a78ba5 to disappear
Apr 28 09:10:46.499: INFO: Pod pod-6ab48324-2b72-4343-b84b-575e12a78ba5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:10:46.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7508" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":34,"skipped":611,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for CRD without validation schema [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 24 lines ...
Apr 28 09:10:54.836: INFO: stderr: ""
Apr 28 09:10:54.836: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-7078-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:10:58.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1941" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":335,"completed":35,"skipped":627,"failed":0}
SSS
------------------------------
[sig-storage] Projected secret
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected secret
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name projected-secret-test-1559abbd-5caa-471c-87d1-9426a0030668
STEP: Creating a pod to test consume secrets
Apr 28 09:10:58.565: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3febdcac-054f-4f6a-a3e5-69352fccd1ad" in namespace "projected-5304" to be "Succeeded or Failed"
Apr 28 09:10:58.587: INFO: Pod "pod-projected-secrets-3febdcac-054f-4f6a-a3e5-69352fccd1ad": Phase="Pending", Reason="", readiness=false. Elapsed: 22.309729ms
Apr 28 09:11:00.604: INFO: Pod "pod-projected-secrets-3febdcac-054f-4f6a-a3e5-69352fccd1ad": Phase="Running", Reason="", readiness=true. Elapsed: 2.038694818s
Apr 28 09:11:02.620: INFO: Pod "pod-projected-secrets-3febdcac-054f-4f6a-a3e5-69352fccd1ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054769722s
STEP: Saw pod success
Apr 28 09:11:02.620: INFO: Pod "pod-projected-secrets-3febdcac-054f-4f6a-a3e5-69352fccd1ad" satisfied condition "Succeeded or Failed"
Apr 28 09:11:02.635: INFO: Trying to get logs from node capz-xey0tp-md-0-cr9v6 pod pod-projected-secrets-3febdcac-054f-4f6a-a3e5-69352fccd1ad container secret-volume-test: <nil>
STEP: delete the pod
Apr 28 09:11:02.707: INFO: Waiting for pod pod-projected-secrets-3febdcac-054f-4f6a-a3e5-69352fccd1ad to disappear
Apr 28 09:11:02.723: INFO: Pod pod-projected-secrets-3febdcac-054f-4f6a-a3e5-69352fccd1ad no longer exists
[AfterEach] [sig-storage] Projected secret
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:11:02.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5304" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":335,"completed":36,"skipped":630,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 28 09:11:02.902: INFO: Waiting up to 5m0s for pod "pod-5c32b77e-39b6-483f-9403-bfb93ea9805b" in namespace "emptydir-3315" to be "Succeeded or Failed"
Apr 28 09:11:02.924: INFO: Pod "pod-5c32b77e-39b6-483f-9403-bfb93ea9805b": Phase="Pending", Reason="", readiness=false. Elapsed: 22.02864ms
Apr 28 09:11:04.941: INFO: Pod "pod-5c32b77e-39b6-483f-9403-bfb93ea9805b": Phase="Running", Reason="", readiness=true. Elapsed: 2.038426918s
Apr 28 09:11:06.958: INFO: Pod "pod-5c32b77e-39b6-483f-9403-bfb93ea9805b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056124424s
STEP: Saw pod success
Apr 28 09:11:06.958: INFO: Pod "pod-5c32b77e-39b6-483f-9403-bfb93ea9805b" satisfied condition "Succeeded or Failed"
Apr 28 09:11:06.974: INFO: Trying to get logs from node capz-xey0tp-md-0-rpjfq pod pod-5c32b77e-39b6-483f-9403-bfb93ea9805b container test-container: <nil>
STEP: delete the pod
Apr 28 09:11:07.046: INFO: Waiting for pod pod-5c32b77e-39b6-483f-9403-bfb93ea9805b to disappear
Apr 28 09:11:07.062: INFO: Pod pod-5c32b77e-39b6-483f-9403-bfb93ea9805b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:11:07.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3315" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":37,"skipped":672,"failed":0}
SSSSSS
------------------------------
[sig-apps] ReplicationController
should release no longer matching pods [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] ReplicationController
... skipping 13 lines ...
Apr 28 09:11:12.280: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:11:12.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2371" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":335,"completed":38,"skipped":678,"failed":0}
SSSSSS
------------------------------
[sig-network] DNS
should provide DNS for pods for Subdomain [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] DNS
... skipping 19 lines ...
Apr 28 09:11:14.711: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2891.svc.cluster.local from pod dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a: the server could not find the requested resource (get pods dns-test-144924dc-8b27-4538-8aac-d71b6590c59a)
Apr 28 09:11:14.728: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2891.svc.cluster.local from pod dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a: the server could not find the requested resource (get pods dns-test-144924dc-8b27-4538-8aac-d71b6590c59a)
Apr 28 09:11:14.745: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local from pod dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a: the server could not find the requested resource (get pods dns-test-144924dc-8b27-4538-8aac-d71b6590c59a)
Apr 28 09:11:14.762: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local from pod dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a: the server could not find the requested resource (get pods dns-test-144924dc-8b27-4538-8aac-d71b6590c59a)
Apr 28 09:11:14.779: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2891.svc.cluster.local from pod dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a: the server could not find the requested resource (get pods dns-test-144924dc-8b27-4538-8aac-d71b6590c59a)
Apr 28 09:11:14.795: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2891.svc.cluster.local from pod dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a: the server could not find the requested resource (get pods dns-test-144924dc-8b27-4538-8aac-d71b6590c59a)
Apr 28 09:11:14.795: INFO: Lookups using dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2891.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2891.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local jessie_udp@dns-test-service-2.dns-2891.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2891.svc.cluster.local]
Apr 28 09:11:19.812: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local from pod dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a: the server could not find the requested resource (get pods dns-test-144924dc-8b27-4538-8aac-d71b6590c59a)
Apr 28 09:11:19.831: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local from pod dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a: the server could not find the requested resource (get pods dns-test-144924dc-8b27-4538-8aac-d71b6590c59a)
Apr 28 09:11:19.881: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local from pod dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a: the server could not find the requested resource (get pods dns-test-144924dc-8b27-4538-8aac-d71b6590c59a)
Apr 28 09:11:19.898: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local from pod dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a: the server could not find the requested resource (get pods dns-test-144924dc-8b27-4538-8aac-d71b6590c59a)
Apr 28 09:11:19.931: INFO: Lookups using dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local]
Apr 28 09:11:24.813: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local from pod dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a: the server could not find the requested resource (get pods dns-test-144924dc-8b27-4538-8aac-d71b6590c59a)
Apr 28 09:11:24.831: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local from pod dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a: the server could not find the requested resource (get pods dns-test-144924dc-8b27-4538-8aac-d71b6590c59a)
Apr 28 09:11:24.882: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local from pod dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a: the server could not find the requested resource (get pods dns-test-144924dc-8b27-4538-8aac-d71b6590c59a)
Apr 28 09:11:24.898: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local from pod dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a: the server could not find the requested resource (get pods dns-test-144924dc-8b27-4538-8aac-d71b6590c59a)
Apr 28 09:11:24.933: INFO: Lookups using dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local]
Apr 28 09:11:29.812: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local from pod dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a: the server could not find the requested resource (get pods dns-test-144924dc-8b27-4538-8aac-d71b6590c59a)
Apr 28 09:11:29.829: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local from pod dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a: the server could not find the requested resource (get pods dns-test-144924dc-8b27-4538-8aac-d71b6590c59a)
Apr 28 09:11:29.879: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local from pod dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a: the server could not find the requested resource (get pods dns-test-144924dc-8b27-4538-8aac-d71b6590c59a)
Apr 28 09:11:29.896: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local from pod dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a: the server could not find the requested resource (get pods dns-test-144924dc-8b27-4538-8aac-d71b6590c59a)
Apr 28 09:11:29.928: INFO: Lookups using dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local]
Apr 28 09:11:34.817: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local from pod dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a: the server could not find the requested resource (get pods dns-test-144924dc-8b27-4538-8aac-d71b6590c59a)
Apr 28 09:11:34.845: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local from pod dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a: the server could not find the requested resource (get pods dns-test-144924dc-8b27-4538-8aac-d71b6590c59a)
Apr 28 09:11:34.899: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local from pod dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a: the server could not find the requested resource (get pods dns-test-144924dc-8b27-4538-8aac-d71b6590c59a)
Apr 28 09:11:34.919: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local from pod dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a: the server could not find the requested resource (get pods dns-test-144924dc-8b27-4538-8aac-d71b6590c59a)
Apr 28 09:11:34.954: INFO: Lookups using dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local]
Apr 28 09:11:39.814: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local from pod dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a: the server could not find the requested resource (get pods dns-test-144924dc-8b27-4538-8aac-d71b6590c59a)
Apr 28 09:11:39.831: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local from pod dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a: the server could not find the requested resource (get pods dns-test-144924dc-8b27-4538-8aac-d71b6590c59a)
Apr 28 09:11:39.882: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local from pod dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a: the server could not find the requested resource (get pods dns-test-144924dc-8b27-4538-8aac-d71b6590c59a)
Apr 28 09:11:39.898: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local from pod dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a: the server could not find the requested resource (get pods dns-test-144924dc-8b27-4538-8aac-d71b6590c59a)
Apr 28 09:11:39.936: INFO: Lookups using dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2891.svc.cluster.local]
Apr 28 09:11:44.938: INFO: DNS probes using dns-2891/dns-test-144924dc-8b27-4538-8aac-d71b6590c59a succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:11:45.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2891" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":335,"completed":39,"skipped":684,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy
version v1
A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] version v1
... skipping 39 lines ...
Apr 28 09:11:47.571: INFO: Starting http.Client for https://capz-xey0tp-27a9bbd5.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/proxy-7366/services/test-service/proxy/some/path/with/PUT
Apr 28 09:11:47.589: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT
[AfterEach] version v1
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:11:47.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7366" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":335,"completed":40,"skipped":698,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
should be able to deny attaching pod [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:11:53.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4575" for this suite.
STEP: Destroying namespace "webhook-4575-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":335,"completed":41,"skipped":701,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job
should delete a job [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] Job
... skipping 13 lines ...
Apr 28 09:11:58.212: INFO: Terminating Job.batch foo pods took: 100.391092ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:12:30.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8809" for this suite.
•{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":335,"completed":42,"skipped":722,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume
should provide container's cpu limit [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Downward API volume
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Apr 28 09:12:31.040: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e40bc1c8-874e-421d-8b7d-f81fdbacd136" in namespace "downward-api-8466" to be "Succeeded or Failed"
Apr 28 09:12:31.084: INFO: Pod "downwardapi-volume-e40bc1c8-874e-421d-8b7d-f81fdbacd136": Phase="Pending", Reason="", readiness=false. Elapsed: 43.612689ms
Apr 28 09:12:33.101: INFO: Pod "downwardapi-volume-e40bc1c8-874e-421d-8b7d-f81fdbacd136": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06047487s
STEP: Saw pod success
Apr 28 09:12:33.101: INFO: Pod "downwardapi-volume-e40bc1c8-874e-421d-8b7d-f81fdbacd136" satisfied condition "Succeeded or Failed"
Apr 28 09:12:33.117: INFO: Trying to get logs from node capz-xey0tp-md-0-rpjfq pod downwardapi-volume-e40bc1c8-874e-421d-8b7d-f81fdbacd136 container client-container: <nil>
STEP: delete the pod
Apr 28 09:12:33.184: INFO: Waiting for pod downwardapi-volume-e40bc1c8-874e-421d-8b7d-f81fdbacd136 to disappear
Apr 28 09:12:33.200: INFO: Pod downwardapi-volume-e40bc1c8-874e-421d-8b7d-f81fdbacd136 no longer exists
[AfterEach] [sig-storage] Downward API volume
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:12:33.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8466" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":335,"completed":43,"skipped":727,"failed":0}
SSSSSSSS
------------------------------
[sig-node] Sysctls [LinuxOnly] [NodeConformance]
should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:157
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
... skipping 7 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
[It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:157
STEP: Creating a pod with an ignorelisted, but not allowlisted sysctl on the node
STEP: Wait for pod failed reason
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:12:35.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-5248" for this suite.
•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":335,"completed":44,"skipped":735,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap
should be consumable from pods in volume [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected configMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-8f246f3d-1a94-429c-ae35-51d9308c1c89
STEP: Creating a pod to test consume configMaps
Apr 28 09:12:35.635: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-83ed1254-cc02-44f9-a127-9abed5525ff1" in namespace "projected-7998" to be "Succeeded or Failed"
Apr 28 09:12:35.657: INFO: Pod "pod-projected-configmaps-83ed1254-cc02-44f9-a127-9abed5525ff1": Phase="Pending", Reason="", readiness=false. Elapsed: 22.159216ms
Apr 28 09:12:37.674: INFO: Pod "pod-projected-configmaps-83ed1254-cc02-44f9-a127-9abed5525ff1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.039128927s
STEP: Saw pod success
Apr 28 09:12:37.674: INFO: Pod "pod-projected-configmaps-83ed1254-cc02-44f9-a127-9abed5525ff1" satisfied condition "Succeeded or Failed"
Apr 28 09:12:37.689: INFO: Trying to get logs from node capz-xey0tp-md-0-rpjfq pod pod-projected-configmaps-83ed1254-cc02-44f9-a127-9abed5525ff1 container agnhost-container: <nil>
STEP: delete the pod
Apr 28 09:12:37.908: INFO: Waiting for pod pod-projected-configmaps-83ed1254-cc02-44f9-a127-9abed5525ff1 to disappear
Apr 28 09:12:37.969: INFO: Pod pod-projected-configmaps-83ed1254-cc02-44f9-a127-9abed5525ff1 no longer exists
[AfterEach] [sig-storage] Projected configMap
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:12:37.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7998" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":335,"completed":45,"skipped":741,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Downward API volume
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Apr 28 09:12:38.204: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5a71ff00-aa9a-401a-94eb-f67947288477" in namespace "downward-api-6381" to be "Succeeded or Failed"
Apr 28 09:12:38.231: INFO: Pod "downwardapi-volume-5a71ff00-aa9a-401a-94eb-f67947288477": Phase="Pending", Reason="", readiness=false. Elapsed: 27.714783ms
Apr 28 09:12:40.248: INFO: Pod "downwardapi-volume-5a71ff00-aa9a-401a-94eb-f67947288477": Phase="Running", Reason="", readiness=true. Elapsed: 2.04460113s
Apr 28 09:12:42.265: INFO: Pod "downwardapi-volume-5a71ff00-aa9a-401a-94eb-f67947288477": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061661417s
STEP: Saw pod success
Apr 28 09:12:42.265: INFO: Pod "downwardapi-volume-5a71ff00-aa9a-401a-94eb-f67947288477" satisfied condition "Succeeded or Failed"
Apr 28 09:12:42.281: INFO: Trying to get logs from node capz-xey0tp-md-0-cr9v6 pod downwardapi-volume-5a71ff00-aa9a-401a-94eb-f67947288477 container client-container: <nil>
STEP: delete the pod
Apr 28 09:12:42.352: INFO: Waiting for pod downwardapi-volume-5a71ff00-aa9a-401a-94eb-f67947288477 to disappear
Apr 28 09:12:42.368: INFO: Pod downwardapi-volume-5a71ff00-aa9a-401a-94eb-f67947288477 no longer exists
[AfterEach] [sig-storage] Downward API volume
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:12:42.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6381" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":46,"skipped":761,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet
Basic StatefulSet functionality [StatefulSetBasic]
should perform canary updates and phased rolling updates of template modifications [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] StatefulSet
... skipping 48 lines ...
Apr 28 09:14:33.477: INFO: Waiting for statefulset status.replicas updated to 0
Apr 28 09:14:33.493: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:14:33.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9103" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":335,"completed":47,"skipped":776,"failed":0}
SS
------------------------------
[sig-network] Services
should complete a service status lifecycle [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] Services
... skipping 43 lines ...
[AfterEach] [sig-network] Services
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:14:33.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9173" for this suite.
[AfterEach] [sig-network] Services
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":335,"completed":48,"skipped":778,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
should be able to deny custom resource creation, update and deletion [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 27 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:14:41.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4679" for this suite.
STEP: Destroying namespace "webhook-4679-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":335,"completed":49,"skipped":784,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should be able to deny pod and configmap creation [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 29 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:14:57.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3004" for this suite.
STEP: Destroying namespace "webhook-3004-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":335,"completed":50,"skipped":812,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Apr 28 09:14:57.855: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0380303f-268f-478a-96f0-5dc6247f7dad" in namespace "projected-6476" to be "Succeeded or Failed"
Apr 28 09:14:57.881: INFO: Pod "downwardapi-volume-0380303f-268f-478a-96f0-5dc6247f7dad": Phase="Pending", Reason="", readiness=false. Elapsed: 25.188346ms
Apr 28 09:14:59.899: INFO: Pod "downwardapi-volume-0380303f-268f-478a-96f0-5dc6247f7dad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043984528s
Apr 28 09:15:01.916: INFO: Pod "downwardapi-volume-0380303f-268f-478a-96f0-5dc6247f7dad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06096557s
STEP: Saw pod success
Apr 28 09:15:01.917: INFO: Pod "downwardapi-volume-0380303f-268f-478a-96f0-5dc6247f7dad" satisfied condition "Succeeded or Failed"
Apr 28 09:15:01.932: INFO: Trying to get logs from node capz-xey0tp-md-0-cr9v6 pod downwardapi-volume-0380303f-268f-478a-96f0-5dc6247f7dad container client-container: <nil>
STEP: delete the pod
Apr 28 09:15:02.025: INFO: Waiting for pod downwardapi-volume-0380303f-268f-478a-96f0-5dc6247f7dad to disappear
Apr 28 09:15:02.041: INFO: Pod downwardapi-volume-0380303f-268f-478a-96f0-5dc6247f7dad no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:15:02.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6476" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":335,"completed":51,"skipped":851,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should mutate custom resource with different stored version [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 28 09:15:09.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9469" for this suite.
STEP: Destroying namespace "webhook-9469-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":335,"completed":52,"skipped":859,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Probing container
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Probing container
... skipping 15 lines ...