PR       | k8s-infra-cherrypick-robot: [release-1.23] Disable floating IP on ILB IPv6 rule
Result   | ABORTED
Tests    | 0 failed / 0 succeeded
Started  |
Elapsed  | 1h33m
Revision | 33bead58c60fbd8e3d1c908cfb699b42549237ce
Refs     | 1721
... skipping 67 lines ...
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
/home/prow/go/src/sigs.k8s.io/cloud-provider-azure
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure
Image Tag is 27bd8ed
Error response from daemon: manifest for capzci.azurecr.io/azure-cloud-controller-manager:27bd8ed not found: manifest unknown: manifest tagged by "27bd8ed" is not found
Build Linux Azure amd64 cloud controller manager
make: Entering directory '/home/prow/go/src/sigs.k8s.io/cloud-provider-azure'
make ARCH=amd64 build-ccm-image
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cloud-provider-azure'
docker buildx inspect img-builder > /dev/null || docker buildx create --name img-builder --use
error: no builder "img-builder" found
img-builder
# enable qemu for arm64 build
# https://github.com/docker/buildx/issues/464#issuecomment-741507760
docker run --privileged --rm tonistiigi/binfmt --uninstall qemu-aarch64
Unable to find image 'tonistiigi/binfmt:latest' locally
latest: Pulling from tonistiigi/binfmt
... skipping 1452 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! kubectl get secrets | grep capz-05t52q-kubeconfig; do sleep 1; done"
capz-05t52q-kubeconfig   cluster.x-k8s.io/secret   1   1s
# Get kubeconfig and store it locally.
kubectl get secrets capz-05t52q-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! kubectl --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-05t52q-control-plane-rvxpt   NotReady   control-plane,master   7s   v1.23.5
run "kubectl --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Waiting for 3 control plane machine(s), 2 worker machine(s), and windows machine(s) to become Ready
node/capz-05t52q-control-plane-5t9v7 condition met
node/capz-05t52q-control-plane-8p455 condition met
... skipping 48 lines ...
+++ [0514 12:28:36] Building go targets for linux/amd64: vendor/github.com/onsi/ginkgo/ginkgo
> non-static build: k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo
make[1]: Leaving directory '/home/prow/go/src/k8s.io/kubernetes'
Conformance test: not doing test setup.
I0514 12:28:39.648756   91027 e2e.go:132] Starting e2e run "88dfefc3-8fe9-4db9-bedc-b35ebc39ca37" on Ginkgo node 1
{"msg":"Test Suite starting","total":335,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1652531319 - Will randomize all specs
Will run 335 of 7044 specs
May 14 12:28:42.378: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
... skipping 26 lines ...
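The kubeconfig bootstrap shown above (poll for the cluster-api secret, then extract and base64-decode its `value` field) can be condensed into a standalone sketch. The kubectl call is mocked here with a fixed secret JSON so the decode step runs without a cluster; the base64 payload is illustrative, not the real kubeconfig.

```shell
#!/bin/sh
# Sketch of the kubeconfig retrieval pattern from the log. The secret JSON
# below is a mock stand-in for the real kubectl output.
secret_json='{"data":{"value":"aGVsbG8sIGNsdXN0ZXIK"}}'

# In the real job the secret is first polled with a deadline:
#   timeout --foreground 300 bash -c \
#     "while ! kubectl get secrets | grep capz-05t52q-kubeconfig; do sleep 1; done"

# Decode step (the log pipes through jq -r .data.value; sed suffices for
# this flat mock JSON):
printf '%s' "$secret_json" \
  | sed -n 's/.*"value":"\([^"]*\)".*/\1/p' \
  | base64 --decode
```

In the real job the decoded output is redirected to `./kubeconfig` and passed to subsequent `kubectl --kubeconfig=./kubeconfig` calls.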
[BeforeEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should serve a basic endpoint from pods [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating service endpoint-test2 in namespace services-5735
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5735 to expose endpoints map[]
May 14 12:28:42.932: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found
May 14 12:28:43.987: INFO: successfully validated that service endpoint-test2 in namespace services-5735 exposes endpoints map[]
STEP: Creating pod pod1 in namespace services-5735
May 14 12:28:44.052: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
May 14 12:28:46.071: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
May 14 12:28:48.072: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
May 14 12:28:50.071: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
... skipping 36 lines ...
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:29:09.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5735" for this suite.
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
• {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":335,"completed":1,"skipped":36,"failed":0}
------------------------------
[sig-auth] ServiceAccounts
  should allow opting out of API token automount [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 25 lines ...
May 14 12:29:10.181: INFO: created pod pod-service-account-nomountsa-nomountspec
May 14 12:29:10.181: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:29:10.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9101" for this suite.
• {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":335,"completed":2,"skipped":161,"failed":0}
------------------------------
[sig-instrumentation] Events
  should delete a collection of events [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-instrumentation] Events
... skipping 15 lines ...
STEP: check that the list of events matches the requested quantity
May 14 12:29:10.517: INFO: requesting list of events to confirm quantity
[AfterEach] [sig-instrumentation] Events
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:29:10.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7348" for this suite.
• {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":335,"completed":3,"skipped":162,"failed":0}
------------------------------
[sig-node] Probing container
  should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:219
[BeforeEach] [sig-node] Probing container
... skipping 14 lines ...
May 14 12:30:03.259: INFO: Restart count of pod container-probe-817/busybox-87d751a8-0e7c-4dab-9888-792fbeec39a8 is now 1 (48.46631985s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:30:03.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-817" for this suite.
• {"msg":"PASSED [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":335,"completed":4,"skipped":171,"failed":0}
------------------------------
[sig-api-machinery] Watchers
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] Watchers
... skipping 23 lines ...
May 14 12:30:13.756: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4517 96f40539-fd63-4780-9f58-43212af203df 2962 0 2022-05-14 12:30:03 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-05-14 12:30:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
May 14 12:30:13.756: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4517 96f40539-fd63-4780-9f58-43212af203df 2963 0 2022-05-14 12:30:03 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-05-14 12:30:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:30:13.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4517" for this suite.
• {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":335,"completed":5,"skipped":171,"failed":0}
------------------------------
[sig-storage] Subpath
  Atomic writer volumes
    should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Subpath
... skipping 7 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod pod-subpath-test-configmap-d8rl
STEP: Creating a pod to test atomic-volume-subpath
May 14 12:30:13.986: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-d8rl" in namespace "subpath-3835" to be "Succeeded or Failed"
May 14 12:30:14.005: INFO: Pod "pod-subpath-test-configmap-d8rl": Phase="Pending", Reason="", readiness=false. Elapsed: 19.019212ms
May 14 12:30:16.024: INFO: Pod "pod-subpath-test-configmap-d8rl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038062233s
May 14 12:30:18.043: INFO: Pod "pod-subpath-test-configmap-d8rl": Phase="Running", Reason="", readiness=true. Elapsed: 4.05688584s
May 14 12:30:20.065: INFO: Pod "pod-subpath-test-configmap-d8rl": Phase="Running", Reason="", readiness=true. Elapsed: 6.079139375s
May 14 12:30:22.084: INFO: Pod "pod-subpath-test-configmap-d8rl": Phase="Running", Reason="", readiness=true. Elapsed: 8.098088633s
May 14 12:30:24.103: INFO: Pod "pod-subpath-test-configmap-d8rl": Phase="Running", Reason="", readiness=true. Elapsed: 10.116761485s
... skipping 2 lines ...
May 14 12:30:30.160: INFO: Pod "pod-subpath-test-configmap-d8rl": Phase="Running", Reason="", readiness=true. Elapsed: 16.173839652s
May 14 12:30:32.178: INFO: Pod "pod-subpath-test-configmap-d8rl": Phase="Running", Reason="", readiness=true. Elapsed: 18.191866252s
May 14 12:30:34.197: INFO: Pod "pod-subpath-test-configmap-d8rl": Phase="Running", Reason="", readiness=true. Elapsed: 20.211531645s
May 14 12:30:36.215: INFO: Pod "pod-subpath-test-configmap-d8rl": Phase="Running", Reason="", readiness=true. Elapsed: 22.229552575s
May 14 12:30:38.236: INFO: Pod "pod-subpath-test-configmap-d8rl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.249939581s
STEP: Saw pod success
May 14 12:30:38.236: INFO: Pod "pod-subpath-test-configmap-d8rl" satisfied condition "Succeeded or Failed"
May 14 12:30:38.253: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod pod-subpath-test-configmap-d8rl container test-container-subpath-configmap-d8rl: <nil>
STEP: delete the pod
May 14 12:30:38.375: INFO: Waiting for pod pod-subpath-test-configmap-d8rl to disappear
May 14 12:30:38.398: INFO: Pod pod-subpath-test-configmap-d8rl no longer exists
STEP: Deleting pod pod-subpath-test-configmap-d8rl
May 14 12:30:38.398: INFO: Deleting pod "pod-subpath-test-configmap-d8rl" in namespace "subpath-3835"
[AfterEach] [sig-storage] Subpath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:30:38.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3835" for this suite.
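The pod-phase polling above, like the earlier `timeout --foreground ... while ! ...` loops, follows one shape: re-run a check until it succeeds, giving up after a deadline. A generic shell sketch of that pattern, polling a local file instead of kubectl so it runs standalone (helper name and file contents are illustrative):

```shell
#!/bin/sh
# Generic form of the wait loops in this log: poll a check every second
# until it passes, failing once the deadline expires.
wait_until() {
  deadline=$1
  check=$2
  timeout --foreground "$deadline" sh -c "while ! $check; do sleep 1; done"
}

state=$(mktemp)
( sleep 1; echo Succeeded > "$state" ) &   # the "phase" flips after ~1s
wait_until 10 "grep -q Succeeded '$state'" && echo "condition met"
rm -f "$state"
```

`--foreground` matters when `timeout` is run from an interactive shell or CI step, so the polled command can still be signalled and killed cleanly at the deadline.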
• {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance]","total":335,"completed":6,"skipped":204,"failed":0}
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should be able to deny pod and configmap creation [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 28 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:30:52.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-714" for this suite.
STEP: Destroying namespace "webhook-714-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":335,"completed":7,"skipped":251,"failed":0}
------------------------------
[sig-node] Container Runtime
  blackbox test
  when starting a container that exits
    should run with the expected status [NodeConformance] [Conformance]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Container Runtime
... skipping 21 lines ...
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [sig-node] Container Runtime
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:31:20.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5178" for this suite.
• {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":335,"completed":8,"skipped":256,"failed":0}
------------------------------
[sig-storage] ConfigMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] ConfigMap
... skipping 15 lines ...
STEP: Creating configMap with name cm-test-opt-create-9bd35850-bc5d-424e-8297-74c5dde68215
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:31:25.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9124" for this suite.
• {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":335,"completed":9,"skipped":258,"failed":0}
------------------------------
[sig-network] EndpointSlice
  should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] EndpointSlice
... skipping 13 lines ...
STEP: recreating EndpointSlices after they've been deleted
May 14 12:31:51.198: INFO: EndpointSlice for Service endpointslice-1841/example-named-port not found
[AfterEach] [sig-network] EndpointSlice
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:32:01.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-1841" for this suite.
• {"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":335,"completed":10,"skipped":288,"failed":0}
------------------------------
[sig-node] InitContainer [NodeConformance]
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 14 12:32:01.291: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating the pod
May 14 12:32:01.416: INFO: PodSpec: initContainers in spec.initContainers
May 14 12:32:48.027: INFO: init container has failed twice:
&v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-68e51db0-febf-4add-87a5-bec6e2a27228", GenerateName:"", Namespace:"init-container-9747", SelfLink:"", UID:"13eeffeb-e70e-4dd7-af1a-776ae982044c", ResourceVersion:"3753", Generation:0, CreationTimestamp:time.Date(2022, time.May, 14, 12, 32, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"416692778"}, Annotations:map[string]string{"cni.projectcalico.org/containerID":"e349dc991e7c9529660443b6325aac845211f8c821294d193f0e6c2e062f43ed", "cni.projectcalico.org/podIP":"192.168.92.71/32", "cni.projectcalico.org/podIPs":"192.168.92.71/32"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.May, 14, 12, 32, 1, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002840030), Subresource:""}, v1.ManagedFieldsEntry{Manager:"Go-http-client", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.May, 14, 12, 32, 2, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002840060), Subresource:"status"}, v1.ManagedFieldsEntry{Manager:"calico", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.May, 14, 12, 32, 2, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002840090), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-bl8kp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), 
Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc000896300), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-bl8kp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/true"}, 
Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-bl8kp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-bl8kp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, 
TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000dd4410), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"capz-05t52q-md-0-dxhn8", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00330e000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000dd4490)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000dd44b0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000dd44b8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000dd44bc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0032f4030), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.May, 14, 12, 32, 1, 0, time.Local), Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.May, 14, 12, 32, 1, 0, time.Local), Reason:"ContainersNotReady", 
Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.May, 14, 12, 32, 1, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.May, 14, 12, 32, 1, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.1.0.4", PodIP:"192.168.92.71", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.92.71"}}, StartTime:time.Date(2022, time.May, 14, 12, 32, 1, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00330e0e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00330e150)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf", ContainerID:"containerd://6cdf04c71e9e15baac6cefc29f92cb3c6d3f1c5a3b277e60bb679b25181f13dd", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000896a60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, 
ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000896980), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.6", ImageID:"", ContainerID:"", Started:(*bool)(0xc000dd451c)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:32:48.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9747" for this suite.
• {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":335,"completed":11,"skipped":336,"failed":0}
------------------------------
[sig-apps] Deployment
  should run the lifecycle of a Deployment [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] Deployment
... skipping 92 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83
May 14 12:33:20.359: INFO: Log out all the ReplicaSets if there is no deployment created
[AfterEach] [sig-apps] Deployment
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:33:20.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1875" for this suite.
•{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":335,"completed":12,"skipped":362,"failed":0}
SSSS
------------------------------
[sig-storage] Projected configMap
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected configMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-map-f7a1beda-f965-41ba-928d-486df75d0098
STEP: Creating a pod to test consume configMaps
May 14 12:33:20.596: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-910d182b-356f-496a-9668-c35a6a9d64e0" in namespace "projected-2183" to be "Succeeded or Failed"
May 14 12:33:20.621: INFO: Pod "pod-projected-configmaps-910d182b-356f-496a-9668-c35a6a9d64e0": Phase="Pending", Reason="", readiness=false.
Elapsed: 24.702601ms
May 14 12:33:22.642: INFO: Pod "pod-projected-configmaps-910d182b-356f-496a-9668-c35a6a9d64e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045428611s
May 14 12:33:24.661: INFO: Pod "pod-projected-configmaps-910d182b-356f-496a-9668-c35a6a9d64e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06442267s
STEP: Saw pod success
May 14 12:33:24.661: INFO: Pod "pod-projected-configmaps-910d182b-356f-496a-9668-c35a6a9d64e0" satisfied condition "Succeeded or Failed"
May 14 12:33:24.678: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod pod-projected-configmaps-910d182b-356f-496a-9668-c35a6a9d64e0 container agnhost-container: <nil>
STEP: delete the pod
May 14 12:33:24.762: INFO: Waiting for pod pod-projected-configmaps-910d182b-356f-496a-9668-c35a6a9d64e0 to disappear
May 14 12:33:24.780: INFO: Pod pod-projected-configmaps-910d182b-356f-496a-9668-c35a6a9d64e0 no longer exists
[AfterEach] [sig-storage] Projected configMap
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:33:24.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2183" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":335,"completed":13,"skipped":366,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client
Kubectl run pod
should create a pod from an image when restart is Never [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-cli] Kubectl client
... skipping 20 lines ...
May 14 12:33:28.191: INFO: stderr: ""
May 14 12:33:28.191: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:33:28.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7010" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":335,"completed":14,"skipped":380,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Secrets
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name secret-test-map-5be607ad-701d-452b-a6af-c03f6311773b
STEP: Creating a pod to test consume secrets
May 14 12:33:28.425: INFO: Waiting up to 5m0s for pod "pod-secrets-30dd7e8b-0cdb-429d-97b9-94189efcec5c" in namespace "secrets-6706" to be "Succeeded or Failed"
May 14 12:33:28.448: INFO: Pod "pod-secrets-30dd7e8b-0cdb-429d-97b9-94189efcec5c": Phase="Pending", Reason="", readiness=false.
Elapsed: 22.908297ms
May 14 12:33:30.467: INFO: Pod "pod-secrets-30dd7e8b-0cdb-429d-97b9-94189efcec5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.041548176s
STEP: Saw pod success
May 14 12:33:30.467: INFO: Pod "pod-secrets-30dd7e8b-0cdb-429d-97b9-94189efcec5c" satisfied condition "Succeeded or Failed"
May 14 12:33:30.484: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod pod-secrets-30dd7e8b-0cdb-429d-97b9-94189efcec5c container secret-volume-test: <nil>
STEP: delete the pod
May 14 12:33:30.547: INFO: Waiting for pod pod-secrets-30dd7e8b-0cdb-429d-97b9-94189efcec5c to disappear
May 14 12:33:30.564: INFO: Pod pod-secrets-30dd7e8b-0cdb-429d-97b9-94189efcec5c no longer exists
[AfterEach] [sig-storage] Secrets
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:33:30.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6706" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":335,"completed":15,"skipped":397,"failed":0}
SSSSS
------------------------------
[sig-apps] ReplicaSet
Replace and Patch tests [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] ReplicaSet
... skipping 17 lines ...
May 14 12:33:34.883: INFO: observed ReplicaSet test-rs in namespace replicaset-3659 with ReadyReplicas 2, AvailableReplicas 2
May 14 12:33:35.197: INFO: observed Replicaset test-rs in namespace replicaset-3659 with ReadyReplicas 3 found true
[AfterEach] [sig-apps] ReplicaSet
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:33:35.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3659" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":335,"completed":16,"skipped":402,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
Simple CustomResourceDefinition
getting/updating/patching custom resource definition status sub-resource works [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 7 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
May 14 12:33:35.369: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:33:36.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1419" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":335,"completed":17,"skipped":406,"failed":0}
SSSS
------------------------------
[sig-network] Services
should be able to change the type from ClusterIP to ExternalName [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] Services
... skipping 25 lines ...
[AfterEach] [sig-network] Services
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:33:45.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8192" for this suite.
[AfterEach] [sig-network] Services
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":335,"completed":18,"skipped":410,"failed":0}
SSSSSSSSSS
------------------------------
[sig-node] Downward API
should provide pod UID as env vars [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Downward API
... skipping 3 lines ...
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward api env vars
May 14 12:33:45.282: INFO: Waiting up to 5m0s for pod "downward-api-a5c44fcb-df2e-4fad-bd76-e1c8ee27804b" in namespace "downward-api-1099" to be "Succeeded or Failed"
May 14 12:33:45.312: INFO: Pod "downward-api-a5c44fcb-df2e-4fad-bd76-e1c8ee27804b": Phase="Pending", Reason="", readiness=false. Elapsed: 29.863423ms
May 14 12:33:47.331: INFO: Pod "downward-api-a5c44fcb-df2e-4fad-bd76-e1c8ee27804b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.048598859s
STEP: Saw pod success
May 14 12:33:47.331: INFO: Pod "downward-api-a5c44fcb-df2e-4fad-bd76-e1c8ee27804b" satisfied condition "Succeeded or Failed"
May 14 12:33:47.348: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod downward-api-a5c44fcb-df2e-4fad-bd76-e1c8ee27804b container dapi-container: <nil>
STEP: delete the pod
May 14 12:33:47.434: INFO: Waiting for pod downward-api-a5c44fcb-df2e-4fad-bd76-e1c8ee27804b to disappear
May 14 12:33:47.451: INFO: Pod downward-api-a5c44fcb-df2e-4fad-bd76-e1c8ee27804b no longer exists
[AfterEach] [sig-node] Downward API
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:33:47.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1099" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":335,"completed":19,"skipped":420,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context
When creating a pod with readOnlyRootFilesystem
should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
[BeforeEach] [sig-node] Security Context
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
May 14 12:33:47.642: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-815829ba-ae2e-4fcf-a4ee-c165adb3f50d" in namespace "security-context-test-5281" to be "Succeeded or Failed"
May 14 12:33:47.660: INFO: Pod "busybox-readonly-true-815829ba-ae2e-4fcf-a4ee-c165adb3f50d": Phase="Pending", Reason="", readiness=false.
Elapsed: 17.481425ms
May 14 12:33:49.678: INFO: Pod "busybox-readonly-true-815829ba-ae2e-4fcf-a4ee-c165adb3f50d": Phase="Failed", Reason="", readiness=false. Elapsed: 2.036018961s
May 14 12:33:49.678: INFO: Pod "busybox-readonly-true-815829ba-ae2e-4fcf-a4ee-c165adb3f50d" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:33:49.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5281" for this suite.
•{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":335,"completed":20,"skipped":473,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
should unconditionally reject operations on fail closed webhook [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 14 12:33:49.725: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename webhook
... skipping 6 lines ...
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 14 12:33:50.404: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 14 12:33:53.501: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:33:53.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6644" for this suite.
STEP: Destroying namespace "webhook-6644-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":335,"completed":21,"skipped":484,"failed":0}
SSSSS
------------------------------
[sig-apps] StatefulSet
Basic StatefulSet functionality [StatefulSetBasic]
should validate Statefulset Status endpoints [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] StatefulSet
... skipping 34 lines ...
May 14 12:34:14.421: INFO: Waiting for statefulset status.replicas updated to 0
May 14 12:34:14.443: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:34:14.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5043" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":335,"completed":22,"skipped":489,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
should be able to deny attaching pod [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:34:21.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6264" for this suite.
STEP: Destroying namespace "webhook-6264-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":335,"completed":23,"skipped":499,"failed":0}
SSSSS
------------------------------
[sig-node] Container Runtime
blackbox test
when running a container with a new image
should not be able to pull image from invalid registry [NodeConformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377
[BeforeEach] [sig-node] Container Runtime
... skipping 9 lines ...
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:34:23.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7915" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":335,"completed":24,"skipped":504,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
should be able to convert from CR v1 to CR v2 [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:34:30.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-1624" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":335,"completed":25,"skipped":513,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected configMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-744cc9fd-2b0a-4fe2-a342-3e4753450d13
STEP: Creating a pod to test consume configMaps
May 14 12:34:31.217: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-134040db-3e2c-41f9-b298-f54ec1465062" in namespace "projected-3075" to be "Succeeded or Failed"
May 14 12:34:31.237: INFO: Pod "pod-projected-configmaps-134040db-3e2c-41f9-b298-f54ec1465062": Phase="Pending", Reason="", readiness=false.
Elapsed: 20.650365ms
May 14 12:34:33.257: INFO: Pod "pod-projected-configmaps-134040db-3e2c-41f9-b298-f54ec1465062": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.040005442s
STEP: Saw pod success
May 14 12:34:33.257: INFO: Pod "pod-projected-configmaps-134040db-3e2c-41f9-b298-f54ec1465062" satisfied condition "Succeeded or Failed"
May 14 12:34:33.274: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod pod-projected-configmaps-134040db-3e2c-41f9-b298-f54ec1465062 container projected-configmap-volume-test: <nil>
STEP: delete the pod
May 14 12:34:33.339: INFO: Waiting for pod pod-projected-configmaps-134040db-3e2c-41f9-b298-f54ec1465062 to disappear
May 14 12:34:33.356: INFO: Pod pod-projected-configmaps-134040db-3e2c-41f9-b298-f54ec1465062 no longer exists
[AfterEach] [sig-storage] Projected configMap
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:34:33.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3075" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":335,"completed":26,"skipped":554,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
should mutate custom resource with different stored version [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:34:40.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2435" for this suite.
STEP: Destroying namespace "webhook-2435-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":335,"completed":27,"skipped":562,"failed":0}
SS
------------------------------
[sig-apps] ReplicaSet
should serve a basic image on each replica with a public image [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] ReplicaSet
... skipping 12 lines ...
May 14 12:34:43.142: INFO: Trying to dial the pod
May 14 12:34:48.206: INFO: Controller my-hostname-basic-2059573d-0a11-4ef4-b6d2-09d64e4e43a0: Got expected result from replica 1 [my-hostname-basic-2059573d-0a11-4ef4-b6d2-09d64e4e43a0-qzf5v]: "my-hostname-basic-2059573d-0a11-4ef4-b6d2-09d64e4e43a0-qzf5v", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:34:48.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8092" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":335,"completed":28,"skipped":564,"failed":0}
------------------------------
[sig-node] Security Context
When creating a pod with readOnlyRootFilesystem
should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Security Context
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
May 14 12:34:48.406: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-36559d0c-bfc5-4605-8d4d-c48af93911cc" in namespace "security-context-test-8919" to be "Succeeded or Failed"
May 14 12:34:48.428: INFO: Pod "busybox-readonly-false-36559d0c-bfc5-4605-8d4d-c48af93911cc": Phase="Pending", Reason="", readiness=false. Elapsed: 22.080147ms
May 14 12:34:50.448: INFO: Pod "busybox-readonly-false-36559d0c-bfc5-4605-8d4d-c48af93911cc": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 2.042889635s
May 14 12:34:50.449: INFO: Pod "busybox-readonly-false-36559d0c-bfc5-4605-8d4d-c48af93911cc" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:34:50.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8919" for this suite.
•{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":335,"completed":29,"skipped":564,"failed":0}
SS
------------------------------
[sig-node] Container Lifecycle Hook
when create a pod with lifecycle hook
should execute poststart http hook properly [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Container Lifecycle Hook
... skipping 22 lines ...
May 14 12:34:58.880: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 14 12:34:58.899: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [sig-node] Container Lifecycle Hook
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:34:58.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2700" for this suite.
•{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":335,"completed":30,"skipped":566,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Container Runtime
blackbox test
on terminated container
should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Container Runtime
... skipping 13 lines ...
May 14 12:35:01.189: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:35:01.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6180" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":335,"completed":31,"skipped":586,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0777 on node default medium
May 14 12:35:01.459: INFO: Waiting up to 5m0s for pod "pod-ee1e909d-e637-49b9-96d3-525d87ab673e" in namespace "emptydir-5062" to be "Succeeded or Failed"
May 14 12:35:01.478: INFO: Pod "pod-ee1e909d-e637-49b9-96d3-525d87ab673e": Phase="Pending", Reason="", readiness=false. Elapsed: 18.988706ms
May 14 12:35:03.497: INFO: Pod "pod-ee1e909d-e637-49b9-96d3-525d87ab673e": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 2.037524079s [1mSTEP[0m: Saw pod success May 14 12:35:03.497: INFO: Pod "pod-ee1e909d-e637-49b9-96d3-525d87ab673e" satisfied condition "Succeeded or Failed" May 14 12:35:03.514: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod pod-ee1e909d-e637-49b9-96d3-525d87ab673e container test-container: <nil> [1mSTEP[0m: delete the pod May 14 12:35:03.576: INFO: Waiting for pod pod-ee1e909d-e637-49b9-96d3-525d87ab673e to disappear May 14 12:35:03.594: INFO: Pod pod-ee1e909d-e637-49b9-96d3-525d87ab673e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 12:35:03.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "emptydir-5062" for this suite. [32m•[0m{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":32,"skipped":615,"failed":0} [36mS[0m [90m------------------------------[0m [0m[sig-storage] Projected downwardAPI[0m [1mshould set mode on item file [LinuxOnly] [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-storage] Projected downwardAPI ... skipping 5 lines ... 
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
May 14 12:35:03.782: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f046de47-888d-4632-b292-0312ba565455" in namespace "projected-3559" to be "Succeeded or Failed"
May 14 12:35:03.812: INFO: Pod "downwardapi-volume-f046de47-888d-4632-b292-0312ba565455": Phase="Pending", Reason="", readiness=false. Elapsed: 30.660814ms
May 14 12:35:05.831: INFO: Pod "downwardapi-volume-f046de47-888d-4632-b292-0312ba565455": Phase="Running", Reason="", readiness=true. Elapsed: 2.048809256s
May 14 12:35:07.850: INFO: Pod "downwardapi-volume-f046de47-888d-4632-b292-0312ba565455": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068290611s
STEP: Saw pod success
May 14 12:35:07.850: INFO: Pod "downwardapi-volume-f046de47-888d-4632-b292-0312ba565455" satisfied condition "Succeeded or Failed"
May 14 12:35:07.867: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod downwardapi-volume-f046de47-888d-4632-b292-0312ba565455 container client-container: <nil>
STEP: delete the pod
May 14 12:35:07.937: INFO: Waiting for pod downwardapi-volume-f046de47-888d-4632-b292-0312ba565455 to disappear
May 14 12:35:07.954: INFO: Pod downwardapi-volume-f046de47-888d-4632-b292-0312ba565455 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:35:07.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3559" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":33,"skipped":616,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client
Kubectl replace
  should update a single-container pod's image [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-cli] Kubectl client
... skipping 29 lines ...
May 14 12:35:17.215: INFO: stderr: ""
May 14 12:35:17.215: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:35:17.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9761" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":335,"completed":34,"skipped":621,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 17 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:35:30.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-36" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":335,"completed":35,"skipped":646,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Probing container
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Probing container
... skipping 13 lines ...
May 14 12:35:32.920: INFO: Initial restart count of pod busybox-2b13f1c8-24ae-42f1-b85b-975889b0f691 is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:39:33.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3631" for this suite.
•{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":335,"completed":36,"skipped":665,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0644 on node default medium
May 14 12:39:33.447: INFO: Waiting up to 5m0s for pod "pod-79962bcb-49d7-4705-a858-327c088f6780" in namespace "emptydir-3145" to be "Succeeded or Failed"
May 14 12:39:33.476: INFO: Pod "pod-79962bcb-49d7-4705-a858-327c088f6780": Phase="Pending", Reason="", readiness=false. Elapsed: 29.496441ms
May 14 12:39:35.495: INFO: Pod "pod-79962bcb-49d7-4705-a858-327c088f6780": Phase="Running", Reason="", readiness=true. Elapsed: 2.04756673s
May 14 12:39:37.513: INFO: Pod "pod-79962bcb-49d7-4705-a858-327c088f6780": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066117999s
STEP: Saw pod success
May 14 12:39:37.513: INFO: Pod "pod-79962bcb-49d7-4705-a858-327c088f6780" satisfied condition "Succeeded or Failed"
May 14 12:39:37.530: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod pod-79962bcb-49d7-4705-a858-327c088f6780 container test-container: <nil>
STEP: delete the pod
May 14 12:39:37.619: INFO: Waiting for pod pod-79962bcb-49d7-4705-a858-327c088f6780 to disappear
May 14 12:39:37.636: INFO: Pod pod-79962bcb-49d7-4705-a858-327c088f6780 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:39:37.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3145" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":37,"skipped":678,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation
  should return a 406 for a backend which does not implement metadata [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
... skipping 8 lines ...
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:39:37.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-9208" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":335,"completed":38,"skipped":687,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir volume type on tmpfs
May 14 12:39:38.015: INFO: Waiting up to 5m0s for pod "pod-23a24366-2096-4712-aff6-eec415853a6b" in namespace "emptydir-9729" to be "Succeeded or Failed"
May 14 12:39:38.039: INFO: Pod "pod-23a24366-2096-4712-aff6-eec415853a6b": Phase="Pending", Reason="", readiness=false. Elapsed: 23.380856ms
May 14 12:39:40.058: INFO: Pod "pod-23a24366-2096-4712-aff6-eec415853a6b": Phase="Running", Reason="", readiness=true. Elapsed: 2.042821281s
May 14 12:39:42.077: INFO: Pod "pod-23a24366-2096-4712-aff6-eec415853a6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061837721s
STEP: Saw pod success
May 14 12:39:42.077: INFO: Pod "pod-23a24366-2096-4712-aff6-eec415853a6b" satisfied condition "Succeeded or Failed"
May 14 12:39:42.095: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod pod-23a24366-2096-4712-aff6-eec415853a6b container test-container: <nil>
STEP: delete the pod
May 14 12:39:42.175: INFO: Waiting for pod pod-23a24366-2096-4712-aff6-eec415853a6b to disappear
May 14 12:39:42.192: INFO: Pod pod-23a24366-2096-4712-aff6-eec415853a6b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:39:42.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9729" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":39,"skipped":724,"failed":0}
SSS
------------------------------
[sig-node] Sysctls [LinuxOnly] [NodeConformance]
  should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
... skipping 7 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:39:44.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-8928" for this suite.
•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":335,"completed":40,"skipped":727,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Container Runtime
blackbox test
on terminated container
  should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Container Runtime
... skipping 13 lines ...
May 14 12:39:46.734: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:39:46.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9368" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":335,"completed":41,"skipped":753,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for pods for Subdomain [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] DNS
... skipping 19 lines ...
May 14 12:40:09.155: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-458.svc.cluster.local from pod dns-458/dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275: the server could not find the requested resource (get pods dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275)
May 14 12:40:09.174: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-458.svc.cluster.local from pod dns-458/dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275: the server could not find the requested resource (get pods dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275)
May 14 12:40:09.192: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-458.svc.cluster.local from pod dns-458/dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275: the server could not find the requested resource (get pods dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275)
May 14 12:40:09.211: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-458.svc.cluster.local from pod dns-458/dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275: the server could not find the requested resource (get pods dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275)
May 14 12:40:09.229: INFO: Unable to read jessie_udp@dns-test-service-2.dns-458.svc.cluster.local from pod dns-458/dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275: the server could not find the requested resource (get pods dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275)
May 14 12:40:09.247: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-458.svc.cluster.local from pod dns-458/dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275: the server could not find the requested resource (get pods dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275)
May 14 12:40:09.248: INFO: Lookups using dns-458/dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-458.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-458.svc.cluster.local wheezy_udp@dns-test-service-2.dns-458.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-458.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-458.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-458.svc.cluster.local jessie_udp@dns-test-service-2.dns-458.svc.cluster.local jessie_tcp@dns-test-service-2.dns-458.svc.cluster.local]
May 14 12:40:14.268: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-458.svc.cluster.local from pod dns-458/dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275: the server could not find the requested resource (get pods dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275)
May 14 12:40:14.287: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-458.svc.cluster.local from pod dns-458/dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275: the server could not find the requested resource (get pods dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275)
May 14 12:40:14.305: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-458.svc.cluster.local from pod dns-458/dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275: the server could not find the requested resource (get pods dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275)
May 14 12:40:14.324: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-458.svc.cluster.local from pod dns-458/dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275: the server could not find the requested resource (get pods dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275)
May 14 12:40:14.343: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-458.svc.cluster.local from pod dns-458/dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275: the server could not find the requested resource (get pods dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275)
May 14 12:40:14.362: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-458.svc.cluster.local from pod dns-458/dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275: the server could not find the requested resource (get pods dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275)
May 14 12:40:14.384: INFO: Unable to read jessie_udp@dns-test-service-2.dns-458.svc.cluster.local from pod dns-458/dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275: the server could not find the requested resource (get pods dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275)
May 14 12:40:14.403: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-458.svc.cluster.local from pod dns-458/dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275: the server could not find the requested resource (get pods dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275)
May 14 12:40:14.403: INFO: Lookups using dns-458/dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-458.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-458.svc.cluster.local wheezy_udp@dns-test-service-2.dns-458.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-458.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-458.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-458.svc.cluster.local jessie_udp@dns-test-service-2.dns-458.svc.cluster.local jessie_tcp@dns-test-service-2.dns-458.svc.cluster.local]
May 14 12:40:19.323: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-458.svc.cluster.local from pod dns-458/dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275: the server could not find the requested resource (get pods dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275)
May 14 12:40:19.395: INFO: Lookups using dns-458/dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275 failed for: [wheezy_tcp@dns-test-service-2.dns-458.svc.cluster.local]
May 14 12:40:24.407: INFO: DNS probes using dns-458/dns-test-dff5564c-253a-42e8-8b31-2c5ed4afe275 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:40:24.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-458" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":335,"completed":42,"skipped":765,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] DisruptionController
Listing PodDisruptionBudgets for all namespaces
  should list and delete a collection of PodDisruptionBudgets [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] DisruptionController
... skipping 26 lines ...
May 14 12:40:25.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-2-5789" for this suite.
[AfterEach] [sig-apps] DisruptionController
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:40:25.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-1970" for this suite.
•{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":335,"completed":43,"skipped":774,"failed":0}
SSSSSSSSS
------------------------------
[sig-node] Container Runtime
blackbox test
on terminated container
  should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Container Runtime
... skipping 13 lines ...
May 14 12:40:27.393: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:40:27.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1359" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":335,"completed":44,"skipped":783,"failed":0}
------------------------------
[sig-cli] Kubectl client
Kubectl label
  should update the label on a resource [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-cli] Kubectl client
... skipping 48 lines ...
May 14 12:40:32.454: INFO: stderr: ""
May 14 12:40:32.454: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:40:32.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9562" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":335,"completed":45,"skipped":783,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu limit [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
May 14 12:40:32.676: INFO: Waiting up to 5m0s for pod "downwardapi-volume-81a01135-1ae4-4990-b6ed-cfb25b8dfc32" in namespace "projected-1276" to be "Succeeded or Failed"
May 14 12:40:32.697: INFO: Pod "downwardapi-volume-81a01135-1ae4-4990-b6ed-cfb25b8dfc32": Phase="Pending", Reason="", readiness=false. Elapsed: 20.649233ms
May 14 12:40:34.717: INFO: Pod "downwardapi-volume-81a01135-1ae4-4990-b6ed-cfb25b8dfc32": Phase="Running", Reason="", readiness=true. Elapsed: 2.040322242s
May 14 12:40:36.736: INFO: Pod "downwardapi-volume-81a01135-1ae4-4990-b6ed-cfb25b8dfc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059370724s
STEP: Saw pod success
May 14 12:40:36.736: INFO: Pod "downwardapi-volume-81a01135-1ae4-4990-b6ed-cfb25b8dfc32" satisfied condition "Succeeded or Failed"
May 14 12:40:36.753: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod downwardapi-volume-81a01135-1ae4-4990-b6ed-cfb25b8dfc32 container client-container: <nil>
STEP: delete the pod
May 14 12:40:36.823: INFO: Waiting for pod downwardapi-volume-81a01135-1ae4-4990-b6ed-cfb25b8dfc32 to disappear
May 14 12:40:36.845: INFO: Pod downwardapi-volume-81a01135-1ae4-4990-b6ed-cfb25b8dfc32 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:40:36.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1276" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":335,"completed":46,"skipped":801,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap
  should fail to create ConfigMap with empty key [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 14 12:40:36.899: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap that has name configmap-test-emptyKey-cb2d861a-1027-4766-b767-16f33e411953
[AfterEach] [sig-node] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:40:37.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2164" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":335,"completed":47,"skipped":842,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should be able to change the type from NodePort to ExternalName [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] Services
... skipping 25 lines ...
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:40:46.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8981" for this suite.
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":335,"completed":48,"skipped":870,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 52 lines ...
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:40:58.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3582" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":335,"completed":49,"skipped":885,"failed":0}
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] ConfigMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-volume-96318303-3215-49cd-97af-08f6d842a115
STEP: Creating a pod to test consume configMaps
May 14 12:40:59.142: INFO: Waiting up to 5m0s for pod "pod-configmaps-8931d194-c3af-4fc0-9a2b-575b427a362b" in namespace "configmap-1027" to be "Succeeded or Failed"
May 14 12:40:59.159: INFO: Pod "pod-configmaps-8931d194-c3af-4fc0-9a2b-575b427a362b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.122163ms
May 14 12:41:01.179: INFO: Pod "pod-configmaps-8931d194-c3af-4fc0-9a2b-575b427a362b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036312373s
May 14 12:41:03.199: INFO: Pod "pod-configmaps-8931d194-c3af-4fc0-9a2b-575b427a362b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05727264s
May 14 12:41:05.231: INFO: Pod "pod-configmaps-8931d194-c3af-4fc0-9a2b-575b427a362b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088895276s
May 14 12:41:07.251: INFO: Pod "pod-configmaps-8931d194-c3af-4fc0-9a2b-575b427a362b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.108657184s
May 14 12:41:09.269: INFO: Pod "pod-configmaps-8931d194-c3af-4fc0-9a2b-575b427a362b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.127102824s
May 14 12:41:11.291: INFO: Pod "pod-configmaps-8931d194-c3af-4fc0-9a2b-575b427a362b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.148966712s
May 14 12:41:13.309: INFO: Pod "pod-configmaps-8931d194-c3af-4fc0-9a2b-575b427a362b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.167139425s
STEP: Saw pod success
May 14 12:41:13.310: INFO: Pod "pod-configmaps-8931d194-c3af-4fc0-9a2b-575b427a362b" satisfied condition "Succeeded or Failed"
May 14 12:41:13.327: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod pod-configmaps-8931d194-c3af-4fc0-9a2b-575b427a362b container agnhost-container: <nil>
STEP: delete the pod
May 14 12:41:13.394: INFO: Waiting for pod pod-configmaps-8931d194-c3af-4fc0-9a2b-575b427a362b to disappear
May 14 12:41:13.411: INFO: Pod pod-configmaps-8931d194-c3af-4fc0-9a2b-575b427a362b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:41:13.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1027" for this suite.
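The repeated `Elapsed:` entries above are the e2e framework polling the pod's phase on a roughly 2-second interval against a 5-minute ceiling. A minimal bash sketch of that wait pattern; the `wait_for` helper and the `kubectl` example condition are illustrative, not the framework's actual API:

```shell
# Poll a condition every 2s until it succeeds or a timeout (seconds) expires,
# mirroring the framework's "Waiting up to 5m0s for pod ..." loop.
wait_for() {
  timeout=$1; shift
  start=$(date +%s)
  until "$@"; do
    now=$(date +%s)
    if [ $((now - start)) -ge "$timeout" ]; then
      echo "timed out after ${timeout}s" >&2
      return 1
    fi
    sleep 2
  done
}
# e.g.: wait_for 300 sh -c 'kubectl get pod my-pod -o jsonpath={.status.phase} | grep -qE "Succeeded|Failed"'
```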
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":50,"skipped":901,"failed":0}
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0644 on node default medium
May 14 12:41:13.604: INFO: Waiting up to 5m0s for pod "pod-7fcc084d-51d0-4461-a0c6-3129ee7864d5" in namespace "emptydir-5429" to be "Succeeded or Failed"
May 14 12:41:13.629: INFO: Pod "pod-7fcc084d-51d0-4461-a0c6-3129ee7864d5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.706372ms
May 14 12:41:15.647: INFO: Pod "pod-7fcc084d-51d0-4461-a0c6-3129ee7864d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042969857s
May 14 12:41:17.666: INFO: Pod "pod-7fcc084d-51d0-4461-a0c6-3129ee7864d5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061480961s
May 14 12:41:19.684: INFO: Pod "pod-7fcc084d-51d0-4461-a0c6-3129ee7864d5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080324878s
May 14 12:41:21.704: INFO: Pod "pod-7fcc084d-51d0-4461-a0c6-3129ee7864d5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100252087s
May 14 12:41:23.723: INFO: Pod "pod-7fcc084d-51d0-4461-a0c6-3129ee7864d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.11876401s
STEP: Saw pod success
May 14 12:41:23.723: INFO: Pod "pod-7fcc084d-51d0-4461-a0c6-3129ee7864d5" satisfied condition "Succeeded or Failed"
May 14 12:41:23.741: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod pod-7fcc084d-51d0-4461-a0c6-3129ee7864d5 container test-container: <nil>
STEP: delete the pod
May 14 12:41:23.830: INFO: Waiting for pod pod-7fcc084d-51d0-4461-a0c6-3129ee7864d5 to disappear
May 14 12:41:23.847: INFO: Pod pod-7fcc084d-51d0-4461-a0c6-3129ee7864d5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:41:23.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5429" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":51,"skipped":905,"failed":0}
------------------------------
[sig-api-machinery] ResourceQuota
  should verify ResourceQuota with best effort scope. [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 20 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:41:40.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1709" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":335,"completed":52,"skipped":923,"failed":0}
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0666 on node default medium
May 14 12:41:40.573: INFO: Waiting up to 5m0s for pod "pod-63efd075-5303-4601-adfb-57582d1a7c42" in namespace "emptydir-7958" to be "Succeeded or Failed"
May 14 12:41:40.597: INFO: Pod "pod-63efd075-5303-4601-adfb-57582d1a7c42": Phase="Pending", Reason="", readiness=false. Elapsed: 24.06123ms
May 14 12:41:42.616: INFO: Pod "pod-63efd075-5303-4601-adfb-57582d1a7c42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.042367792s
STEP: Saw pod success
May 14 12:41:42.616: INFO: Pod "pod-63efd075-5303-4601-adfb-57582d1a7c42" satisfied condition "Succeeded or Failed"
May 14 12:41:42.634: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod pod-63efd075-5303-4601-adfb-57582d1a7c42 container test-container: <nil>
STEP: delete the pod
May 14 12:41:42.717: INFO: Waiting for pod pod-63efd075-5303-4601-adfb-57582d1a7c42 to disappear
May 14 12:41:42.737: INFO: Pod pod-63efd075-5303-4601-adfb-57582d1a7c42 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:41:42.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7958" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":53,"skipped":924,"failed":0}
------------------------------
[sig-storage] ConfigMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] ConfigMap
... skipping 12 lines ...
STEP: Updating configmap configmap-test-upd-eb1e3b11-c03b-4cb6-bd57-a3b45a65110e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:41:47.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8500" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":335,"completed":54,"skipped":964,"failed":0}
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] ConfigMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-volume-map-601b1245-73de-49cc-ba38-f7f929f77e68
STEP: Creating a pod to test consume configMaps
May 14 12:41:47.342: INFO: Waiting up to 5m0s for pod "pod-configmaps-ce3ddbd4-60e0-4e88-8867-07a10cb7ba62" in namespace "configmap-5838" to be "Succeeded or Failed"
May 14 12:41:47.366: INFO: Pod "pod-configmaps-ce3ddbd4-60e0-4e88-8867-07a10cb7ba62": Phase="Pending", Reason="", readiness=false. Elapsed: 24.110623ms
May 14 12:41:49.385: INFO: Pod "pod-configmaps-ce3ddbd4-60e0-4e88-8867-07a10cb7ba62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.043507802s
STEP: Saw pod success
May 14 12:41:49.386: INFO: Pod "pod-configmaps-ce3ddbd4-60e0-4e88-8867-07a10cb7ba62" satisfied condition "Succeeded or Failed"
May 14 12:41:49.402: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod pod-configmaps-ce3ddbd4-60e0-4e88-8867-07a10cb7ba62 container agnhost-container: <nil>
STEP: delete the pod
May 14 12:41:49.462: INFO: Waiting for pod pod-configmaps-ce3ddbd4-60e0-4e88-8867-07a10cb7ba62 to disappear
May 14 12:41:49.483: INFO: Pod pod-configmaps-ce3ddbd4-60e0-4e88-8867-07a10cb7ba62 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:41:49.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5838" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":335,"completed":55,"skipped":989,"failed":0}
------------------------------
[sig-storage] Projected configMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected configMap
... skipping 16 lines ...
STEP: Creating configMap with name cm-test-opt-create-1968fc20-fff2-4646-84d3-fec25d661a7f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:43:17.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5250" for this suite.
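The `[32m`, `[1m`, and `[0m` fragments sprinkled through this log are ANSI SGR color sequences whose leading ESC byte was lost in capture. When the escapes are intact, they can be stripped before archiving or diffing a log; a sketch of the usual one-liner (GNU sed is assumed for the `\x1b` escape):

```shell
# Remove ANSI color sequences (ESC [ ... m) from colored test output.
printf '\033[32mPASSED\033[0m [sig-node] Secrets\n' | sed 's/\x1b\[[0-9;]*m//g'
# prints: PASSED [sig-node] Secrets
```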
•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":335,"completed":56,"skipped":1056,"failed":0}
------------------------------
[sig-instrumentation] Events
  should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-instrumentation] Events
... skipping 12 lines ...
STEP: deleting the test event
STEP: listing all events in all namespaces
[AfterEach] [sig-instrumentation] Events
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:43:17.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-5721" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":335,"completed":57,"skipped":1132,"failed":0}
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 14 12:43:17.587: INFO: Waiting up to 5m0s for pod "pod-577e284d-fa44-41a0-bbb3-248732e8bbd1" in namespace "emptydir-1521" to be "Succeeded or Failed"
May 14 12:43:17.606: INFO: Pod "pod-577e284d-fa44-41a0-bbb3-248732e8bbd1": Phase="Pending", Reason="", readiness=false. Elapsed: 19.153255ms
May 14 12:43:19.625: INFO: Pod "pod-577e284d-fa44-41a0-bbb3-248732e8bbd1": Phase="Running", Reason="", readiness=true. Elapsed: 2.038846874s
May 14 12:43:21.644: INFO: Pod "pod-577e284d-fa44-41a0-bbb3-248732e8bbd1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057338288s
STEP: Saw pod success
May 14 12:43:21.644: INFO: Pod "pod-577e284d-fa44-41a0-bbb3-248732e8bbd1" satisfied condition "Succeeded or Failed"
May 14 12:43:21.661: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod pod-577e284d-fa44-41a0-bbb3-248732e8bbd1 container test-container: <nil>
STEP: delete the pod
May 14 12:43:21.735: INFO: Waiting for pod pod-577e284d-fa44-41a0-bbb3-248732e8bbd1 to disappear
May 14 12:43:21.752: INFO: Pod pod-577e284d-fa44-41a0-bbb3-248732e8bbd1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:43:21.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1521" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":58,"skipped":1136,"failed":0}
------------------------------
[sig-cli] Kubectl client
  Kubectl cluster-info
  should check if Kubernetes control plane services is included in cluster-info [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-cli] Kubectl client
... skipping 12 lines ...
May 14 12:43:22.102: INFO: stderr: ""
May 14 12:43:22.102: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://capz-05t52q-8c0f6c09.northcentralus.cloudapp.azure.com:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:43:22.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5635" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":335,"completed":59,"skipped":1165,"failed":0}
------------------------------
[sig-storage] Downward API volume
  should update labels on modification [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Downward API volume
... skipping 12 lines ...
May 14 12:43:24.337: INFO: The status of Pod labelsupdate610d1313-e141-4c49-82c2-7b4edaa720de is Running (Ready = true)
May 14 12:43:24.924: INFO: Successfully updated pod "labelsupdate610d1313-e141-4c49-82c2-7b4edaa720de"
[AfterEach] [sig-storage] Downward API volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:43:29.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1059" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":335,"completed":60,"skipped":1182,"failed":0}
------------------------------
[sig-node] Secrets
  should fail to create secret due to empty secret key [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Secrets
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 14 12:43:29.051: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating projection with secret that has name secret-emptykey-test-a10c79a1-5d6f-44aa-bb89-6dfcb067dda4
[AfterEach] [sig-node] Secrets
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:43:29.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2277" for this suite.
•{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":335,"completed":61,"skipped":1187,"failed":0}
------------------------------
[sig-apps] ReplicaSet
  should list and delete a collection of ReplicaSets [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] ReplicaSet
... skipping 15 lines ...
STEP: DeleteCollection of the ReplicaSets
STEP: After DeleteCollection verify that ReplicaSets have been deleted
[AfterEach] [sig-apps] ReplicaSet
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:43:34.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9618" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":335,"completed":62,"skipped":1245,"failed":0}
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should be able to deny custom resource creation, update and deletion [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 27 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:43:41.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9276" for this suite.
STEP: Destroying namespace "webhook-9276-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":335,"completed":63,"skipped":1248,"failed":0}
------------------------------
[sig-network] Networking
  Granular Checks: Pods
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] Networking
... skipping 36 lines ...
May 14 12:44:05.932: INFO: ExecWithOptions: execute(POST https://capz-05t52q-8c0f6c09.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/pod-network-test-6341/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+192.168.204.133+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING))
May 14 12:44:07.189: INFO: Found all 1 expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:44:07.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6341" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":64,"skipped":1261,"failed":0}
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should mutate pod and apply defaults after mutation [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:44:11.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7946" for this suite.
STEP: Destroying namespace "webhook-7946-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":335,"completed":65,"skipped":1264,"failed":0}
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 13 lines ...
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:44:39.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8352" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":335,"completed":66,"skipped":1322,"failed":0}
------------------------------
[sig-node] RuntimeClass
  should support RuntimeClasses API operations [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] RuntimeClass
... skipping 19 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-node] RuntimeClass
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:44:40.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-8550" for this suite.
•{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":335,"completed":67,"skipped":1332,"failed":0}
------------------------------
[sig-storage] Subpath
  Atomic writer volumes
  should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Subpath
... skipping 7 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod pod-subpath-test-projected-cmb2
STEP: Creating a pod to test atomic-volume-subpath
May 14 12:44:40.674: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-cmb2" in namespace "subpath-4800" to be "Succeeded or Failed"
May 14 12:44:40.696: INFO: Pod "pod-subpath-test-projected-cmb2": Phase="Pending", Reason="", readiness=false. Elapsed: 21.800328ms
May 14 12:44:42.715: INFO: Pod "pod-subpath-test-projected-cmb2": Phase="Running", Reason="", readiness=true. Elapsed: 2.040933239s
May 14 12:44:44.734: INFO: Pod "pod-subpath-test-projected-cmb2": Phase="Running", Reason="", readiness=true. Elapsed: 4.059682066s
May 14 12:44:46.754: INFO: Pod "pod-subpath-test-projected-cmb2": Phase="Running", Reason="", readiness=true. Elapsed: 6.07971235s
May 14 12:44:48.772: INFO: Pod "pod-subpath-test-projected-cmb2": Phase="Running", Reason="", readiness=true. Elapsed: 8.098166263s
May 14 12:44:50.791: INFO: Pod "pod-subpath-test-projected-cmb2": Phase="Running", Reason="", readiness=true. Elapsed: 10.117291901s
... skipping 2 lines ...
May 14 12:44:56.852: INFO: Pod "pod-subpath-test-projected-cmb2": Phase="Running", Reason="", readiness=true. Elapsed: 16.17801252s
May 14 12:44:58.872: INFO: Pod "pod-subpath-test-projected-cmb2": Phase="Running", Reason="", readiness=true. Elapsed: 18.19765896s
May 14 12:45:00.890: INFO: Pod "pod-subpath-test-projected-cmb2": Phase="Running", Reason="", readiness=true. Elapsed: 20.216137206s
May 14 12:45:02.910: INFO: Pod "pod-subpath-test-projected-cmb2": Phase="Running", Reason="", readiness=true. Elapsed: 22.235771279s
May 14 12:45:04.929: INFO: Pod "pod-subpath-test-projected-cmb2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.254727255s
STEP: Saw pod success
May 14 12:45:04.929: INFO: Pod "pod-subpath-test-projected-cmb2" satisfied condition "Succeeded or Failed"
May 14 12:45:04.946: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod pod-subpath-test-projected-cmb2 container test-container-subpath-projected-cmb2: <nil>
STEP: delete the pod
May 14 12:45:05.038: INFO: Waiting for pod pod-subpath-test-projected-cmb2 to disappear
May 14 12:45:05.058: INFO: Pod pod-subpath-test-projected-cmb2 no longer exists
STEP: Deleting pod pod-subpath-test-projected-cmb2
May 14 12:45:05.058: INFO: Deleting pod "pod-subpath-test-projected-cmb2" in namespace "subpath-4800"
[AfterEach] [sig-storage] Subpath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:45:05.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4800" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance]","total":335,"completed":68,"skipped":1378,"failed":0}
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  listing validating webhooks should work [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:45:09.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6281" for this suite.
STEP: Destroying namespace "webhook-6281-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":335,"completed":69,"skipped":1380,"failed":0}
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] Networking
... skipping 36 lines ...
May 14 12:45:32.428: INFO: ExecWithOptions: execute(POST https://capz-05t52q-8c0f6c09.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/pod-network-test-1878/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.204.135%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING))
May 14 12:45:32.687: INFO: Found all 1 expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:45:32.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1878" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":70,"skipped":1395,"failed":0}
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic]
  should have a working scale subresource [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] StatefulSet
... skipping 25 lines ...
May 14 12:45:53.240: INFO: Waiting for statefulset status.replicas updated to 0
May 14 12:45:53.257: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:45:53.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2216" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":335,"completed":71,"skipped":1406,"failed":0}
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  updates the published spec when one version gets renamed [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 12 lines ...
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:46:17.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1240" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":335,"completed":72,"skipped":1420,"failed":0}
------------------------------
[sig-node] Kubelet when scheduling a busybox command that always fails in a pod
  should be possible to delete [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Kubelet
... skipping 10 lines ...
[It] should be possible to delete [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-node] Kubelet
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:46:17.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9657" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":335,"completed":73,"skipped":1460,"failed":0}
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for multiple CRDs of same group but different versions [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 11 lines ...
May 14 12:46:34.288: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
May 14 12:46:37.802: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:46:52.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4752" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":335,"completed":74,"skipped":1484,"failed":0}
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  should be able to convert a non homogeneous list of CRs [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:47:01.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-8901" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":335,"completed":75,"skipped":1493,"failed":0}
------------------------------
[sig-node] Container Runtime blackbox test when running a container with a new image
  should be able to pull image [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382
[BeforeEach] [sig-node] Container Runtime
... skipping 9 lines ...
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:47:04.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7251" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":335,"completed":76,"skipped":1493,"failed":0}
------------------------------
[sig-node] InitContainer [NodeConformance]
  should invoke init containers on a RestartAlways pod [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] InitContainer [NodeConformance]
... skipping 10 lines ...
STEP: creating the pod
May 14 12:47:04.615: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:47:08.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5149" for this suite.
•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":335,"completed":77,"skipped":1496,"failed":0}
------------------------------
[sig-api-machinery] Garbage collector
  should not be blocked by dependency circle [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 9 lines ...
May 14 12:47:08.620: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"43f95cb4-e3ee-4f5f-a14b-d148584c23e2", Controller:(*bool)(0xc0048a0086), BlockOwnerDeletion:(*bool)(0xc0048a0087)}}
May 14 12:47:08.643: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"397901e8-9925-4a36-bc8c-42ed92147bbc", Controller:(*bool)(0xc0004e525a), BlockOwnerDeletion:(*bool)(0xc0004e525b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:47:13.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5224" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":335,"completed":78,"skipped":1510,"failed":0}
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition
  listing custom resource definition objects works [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 7 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
May 14 12:47:13.847: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:47:20.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7427" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":335,"completed":79,"skipped":1540,"failed":0}
------------------------------
[sig-node] Probing container
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Probing container
... skipping 8 lines ...
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-node] Probing container
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:48:21.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9870" for this suite.
•{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":335,"completed":80,"skipped":1541,"failed":0}
------------------------------
[sig-scheduling] LimitRange
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-scheduling] LimitRange
... skipping 32 lines ...
May 14 12:48:28.484: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:48:28.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-5157" for this suite.
•{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":335,"completed":81,"skipped":1544,"failed":0}
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 14 12:48:28.692: INFO: Waiting up to 5m0s for pod "pod-8516a2b0-7404-43ac-b852-278598e4e45a" in namespace "emptydir-2449" to be "Succeeded or Failed"
May 14 12:48:28.711: INFO: Pod "pod-8516a2b0-7404-43ac-b852-278598e4e45a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.279513ms
May 14 12:48:30.729: INFO: Pod "pod-8516a2b0-7404-43ac-b852-278598e4e45a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.03655076s
STEP: Saw pod success
May 14 12:48:30.729: INFO: Pod "pod-8516a2b0-7404-43ac-b852-278598e4e45a" satisfied condition "Succeeded or Failed"
May 14 12:48:30.744: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod pod-8516a2b0-7404-43ac-b852-278598e4e45a container test-container: <nil>
STEP: delete the pod
May 14 12:48:30.811: INFO: Waiting for pod pod-8516a2b0-7404-43ac-b852-278598e4e45a to disappear
May 14 12:48:30.826: INFO: Pod pod-8516a2b0-7404-43ac-b852-278598e4e45a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:48:30.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2449" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":82,"skipped":1590,"failed":0}
------------------------------
[sig-cli] Kubectl client Update Demo
  should create and stop a replication controller [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-cli] Kubectl client
... skipping 61 lines ...
May 14 12:48:38.846: INFO: stderr: ""
May 14 12:48:38.846: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:48:38.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8958" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":335,"completed":83,"skipped":1591,"failed":0}
------------------------------
[sig-node] Security Context when creating containers with AllowPrivilegeEscalation
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Security Context
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
May 14 12:48:39.027: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-0d0177d1-6185-4eea-8f98-6e659e4ccdb1" in namespace "security-context-test-926" to be "Succeeded or Failed"
May 14 12:48:39.046: INFO: Pod "alpine-nnp-false-0d0177d1-6185-4eea-8f98-6e659e4ccdb1": Phase="Pending", Reason="", readiness=false. Elapsed: 18.810962ms
May 14 12:48:41.062: INFO: Pod "alpine-nnp-false-0d0177d1-6185-4eea-8f98-6e659e4ccdb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03461682s
May 14 12:48:43.077: INFO: Pod "alpine-nnp-false-0d0177d1-6185-4eea-8f98-6e659e4ccdb1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050372411s
May 14 12:48:45.101: INFO: Pod "alpine-nnp-false-0d0177d1-6185-4eea-8f98-6e659e4ccdb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.074468231s
May 14 12:48:45.101: INFO: Pod "alpine-nnp-false-0d0177d1-6185-4eea-8f98-6e659e4ccdb1" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:48:45.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-926" for this suite.
•{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":84,"skipped":1615,"failed":0}
------------------------------
[sig-apps] ReplicaSet
  should validate Replicaset Status endpoints [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] ReplicaSet
... skipping 33 lines ...
May 14 12:48:47.502: INFO: Found replicaset test-rs in namespace replicaset-641 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC }
May 14 12:48:47.502: INFO: Replicaset test-rs has a patched status
[AfterEach] [sig-apps] ReplicaSet
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:48:47.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-641" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":335,"completed":85,"skipped":1630,"failed":0}
------------------------------
[sig-node] Container Runtime blackbox test when running a container with a new image
  should be able to pull from private registry with secret [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393
[BeforeEach] [sig-node] Container Runtime
... skipping 10 lines ...
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:48:52.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1138" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":335,"completed":86,"skipped":1653,"failed":0}
------------------------------
[sig-node] Downward API
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Downward API
... skipping 3 lines ...
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward api env vars
May 14 12:48:53.071: INFO: Waiting up to 5m0s for pod "downward-api-c304c5bf-c7e4-46a6-a549-3c39dc19355a" in namespace "downward-api-1127" to be "Succeeded or Failed"
May 14 12:48:53.096: INFO: Pod "downward-api-c304c5bf-c7e4-46a6-a549-3c39dc19355a": Phase="Pending", Reason="", readiness=false. Elapsed: 25.482149ms
May 14 12:48:55.115: INFO: Pod "downward-api-c304c5bf-c7e4-46a6-a549-3c39dc19355a": Phase="Running", Reason="", readiness=true. Elapsed: 2.044417744s
May 14 12:48:57.131: INFO: Pod "downward-api-c304c5bf-c7e4-46a6-a549-3c39dc19355a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060825444s
STEP: Saw pod success
May 14 12:48:57.131: INFO: Pod "downward-api-c304c5bf-c7e4-46a6-a549-3c39dc19355a" satisfied condition "Succeeded or Failed"
May 14 12:48:57.147: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod downward-api-c304c5bf-c7e4-46a6-a549-3c39dc19355a container dapi-container: <nil>
STEP: delete the pod
May 14 12:48:57.215: INFO: Waiting for pod downward-api-c304c5bf-c7e4-46a6-a549-3c39dc19355a to disappear
May 14 12:48:57.235: INFO: Pod downward-api-c304c5bf-c7e4-46a6-a549-3c39dc19355a no longer exists
[AfterEach] [sig-node] Downward API
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:48:57.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1127" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":335,"completed":87,"skipped":1655,"failed":0}
------------------------------
[sig-auth] Certificates API [Privileged:ClusterAdmin]
  should support CSR API operations [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
... skipping 26 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:48:58.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-8335" for this suite.
•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":335,"completed":88,"skipped":1724,"failed":0}
------------------------------
[sig-cli] Kubectl client Kubectl describe
  should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-cli] Kubectl client
... skipping 23 lines ...
May 14 12:49:02.279: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 14 12:49:02.279: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://capz-05t52q-8c0f6c09.northcentralus.cloudapp.azure.com:6443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig --namespace=kubectl-2455 describe pod agnhost-primary-dw5vh'
May 14 12:49:02.502: INFO: stderr: ""
May 14 12:49:02.502: INFO: stdout: "Name: agnhost-primary-dw5vh\nNamespace: kubectl-2455\nPriority: 0\nNode: capz-05t52q-md-0-scnhc/10.1.0.5\nStart Time: Sat, 14 May 2022 12:48:59 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: cni.projectcalico.org/containerID: ffb58ac2e9a528394e594e82650be39391d4a26c34e6d5bba61bf9a318268797\n cni.projectcalico.org/podIP: 192.168.204.147/32\n cni.projectcalico.org/podIPs: 192.168.204.147/32\nStatus: Running\nIP: 192.168.204.147\nIPs:\n IP: 192.168.204.147\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://75baa761a5a3f2d5403dcad559d9aa803933ad947def4f3bff2c7f1127d90161\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.33\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 14 May 2022 12:49:01 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mgmqt (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-mgmqt:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 2s default-scheduler Successfully assigned kubectl-2455/agnhost-primary-dw5vh to capz-05t52q-md-0-scnhc\n Normal Pulled 2s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.33\" already present on machine\n Normal Created 2s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n"
May 14 12:49:02.503: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://capz-05t52q-8c0f6c09.northcentralus.cloudapp.azure.com:6443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig --namespace=kubectl-2455 describe rc agnhost-primary'
May 14 12:49:02.692: INFO: stderr: ""
May 14 12:49:02.692: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-2455\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.33\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-primary-dw5vh\n"
May 14 12:49:02.692: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://capz-05t52q-8c0f6c09.northcentralus.cloudapp.azure.com:6443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig --namespace=kubectl-2455 describe service agnhost-primary'
May 14 12:49:02.920: INFO: stderr: ""
May 14 12:49:02.920: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-2455\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.108.229.213\nIPs: 10.108.229.213\nPort: <unset> 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 192.168.204.147:6379\nSession Affinity: None\nEvents: <none>\n"
May 14 12:49:02.942: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://capz-05t52q-8c0f6c09.northcentralus.cloudapp.azure.com:6443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig --namespace=kubectl-2455 describe node capz-05t52q-control-plane-5t9v7'
May 14 12:49:03.162: INFO: stderr: ""
May 14 12:49:03.163: INFO: stdout: "Name: capz-05t52q-control-plane-5t9v7\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=Standard_D2s_v3\n beta.kubernetes.io/os=linux\n failure-domain.beta.kubernetes.io/region=northcentralus\n failure-domain.beta.kubernetes.io/zone=2\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=capz-05t52q-control-plane-5t9v7\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\n node.kubernetes.io/exclude-from-external-load-balancers=\n node.kubernetes.io/instance-type=Standard_D2s_v3\n topology.kubernetes.io/region=northcentralus\n topology.kubernetes.io/zone=2\nAnnotations: cluster.x-k8s.io/cluster-name: capz-05t52q\n cluster.x-k8s.io/cluster-namespace: default\n cluster.x-k8s.io/machine: capz-05t52q-control-plane-rgw9v\n cluster.x-k8s.io/owner-kind: KubeadmControlPlane\n cluster.x-k8s.io/owner-name: capz-05t52q-control-plane\n kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n projectcalico.org/IPv4Address: 10.0.0.6/16\n projectcalico.org/IPv4VXLANTunnelAddr: 192.168.135.64\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 14 May 2022 12:22:14 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: capz-05t52q-control-plane-5t9v7\n AcquireTime: <unset>\n RenewTime: Sat, 14 May 2022 12:48:56 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Sat, 14 May 2022 12:23:09 +0000 Sat, 14 May 2022 12:23:09 +0000 RouteCreated RouteController created a route\n MemoryPressure False Sat, 14 May 2022 12:48:49 +0000 Sat, 14 May 2022 12:22:14 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 14 May 2022 12:48:49 +0000 Sat, 14 May 2022 12:22:14 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 14 May 2022 12:48:49 +0000 Sat, 14 May 2022 12:22:14 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 14 May 2022 12:48:49 +0000 Sat, 14 May 2022 12:22:46 +0000 KubeletReady kubelet is posting ready status.
AppArmor enabled\nAddresses:\n InternalIP: 10.0.0.6\n Hostname: capz-05t52q-control-plane-5t9v7\nCapacity:\n cpu: 2\n ephemeral-storage: 129900528Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 8145332Ki\n pods: 110\nAllocatable:\n cpu: 2\n ephemeral-storage: 119716326407\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 8042932Ki\n pods: 110\nSystem Info:\n Machine ID: bfb80603d2894bec814db4af387324d2\n System UUID: 36b4108d-881c-6343-8dbd-f5c4115abc23\n Boot ID: 5eba3492-2152-4604-b786-2ad84921acd0\n Kernel Version: 5.13.0-1017-azure\n OS Image: Ubuntu 20.04.4 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.6.1\n Kubelet Version: v1.23.5\n Kube-Proxy Version: v1.23.5\nPodCIDR: 10.244.4.0/24\nPodCIDRs: 10.244.4.0/24\nProviderID: azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-05t52q/providers/Microsoft.Compute/virtualMachines/capz-05t52q-control-plane-5t9v7\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system calico-node-js2lw 250m (12%) 0 (0%) 0 (0%) 0 (0%) 26m\n kube-system cloud-node-manager-7vhmp 50m (2%) 2 (100%) 50Mi (0%) 512Mi (6%) 26m\n kube-system etcd-capz-05t52q-control-plane-5t9v7 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 26m\n kube-system kube-apiserver-capz-05t52q-control-plane-5t9v7 250m (12%) 0 (0%) 0 (0%) 0 (0%) 26m\n kube-system kube-controller-manager-capz-05t52q-control-plane-5t9v7 200m (10%) 0 (0%) 0 (0%) 0 (0%) 26m\n kube-system kube-proxy-vvkrt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 26m\n kube-system kube-scheduler-capz-05t52q-control-plane-5t9v7 100m (5%) 0 (0%) 0 (0%) 0 (0%) 26m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 950m (47%) 2 (100%)\n memory 150Mi (1%) 512Mi (6%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 
(0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Starting 26m kube-proxy \n Warning InvalidDiskCapacity 26m kubelet invalid capacity 0 on image filesystem\n Normal NodeHasSufficientMemory 26m kubelet Node capz-05t52q-control-plane-5t9v7 status is now: NodeHasSufficientMemory\n Normal NodeHasNoDiskPressure 26m kubelet Node capz-05t52q-control-plane-5t9v7 status is now: NodeHasNoDiskPressure\n Normal NodeHasSufficientPID 26m kubelet Node capz-05t52q-control-plane-5t9v7 status is now: NodeHasSufficientPID\n Normal NodeAllocatableEnforced 26m kubelet Updated Node Allocatable limit across pods\n Normal Starting 26m kubelet Starting kubelet.\n Normal NodeReady 26m kubelet Node capz-05t52q-control-plane-5t9v7 status is now: NodeReady\n" May 14 12:49:03.163: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://capz-05t52q-8c0f6c09.northcentralus.cloudapp.azure.com:6443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig --namespace=kubectl-2455 describe namespace kubectl-2455' May 14 12:49:03.342: INFO: stderr: "" May 14 12:49:03.342: INFO: stdout: "Name: kubectl-2455\nLabels: e2e-framework=kubectl\n e2e-run=88dfefc3-8fe9-4db9-bedc-b35ebc39ca37\n kubernetes.io/metadata.name=kubectl-2455\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 12:49:03.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2455" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":335,"completed":89,"skipped":1745,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [BeforeEach] [sig-apps] StatefulSet ... skipping 13 lines ... STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-3284 STEP: Waiting until pod test-pod will start running in namespace statefulset-3284 STEP: Creating statefulset with conflicting port in namespace statefulset-3284 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3284 May 14 12:49:07.659: INFO: Observed stateful pod in namespace: statefulset-3284, name: ss-0, uid: 555d5a0e-0bfb-438a-abf2-18ed42b27578, status phase: Pending. Waiting for statefulset controller to delete. May 14 12:49:07.699: INFO: Observed stateful pod in namespace: statefulset-3284, name: ss-0, uid: 555d5a0e-0bfb-438a-abf2-18ed42b27578, status phase: Failed. Waiting for statefulset controller to delete. 
May 14 12:49:07.952: INFO: Observed stateful pod in namespace: statefulset-3284, name: ss-0, uid: 555d5a0e-0bfb-438a-abf2-18ed42b27578, status phase: Failed. Waiting for statefulset controller to delete. May 14 12:49:07.967: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3284 STEP: Removing pod with conflicting port in namespace statefulset-3284 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3284 and will be in running state [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 May 14 12:49:10.055: INFO: Deleting all statefulset in ns statefulset-3284 May 14 12:49:10.073: INFO: Scaling statefulset ss to 0 May 14 12:49:20.149: INFO: Waiting for statefulset status.replicas updated to 0 May 14 12:49:20.164: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 12:49:20.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3284" for this suite. •{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":335,"completed":90,"skipped":1799,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [BeforeEach] [sig-network] Services ... skipping 10 lines ... 
[AfterEach] [sig-network] Services /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 12:49:20.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3800" for this suite. [AfterEach] [sig-network] Services /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":335,"completed":91,"skipped":1826,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [BeforeEach] [sig-storage] Projected secret ... skipping 4 lines ... STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating projection with secret that has name projected-secret-test-map-b6d60523-2a1c-40c8-9749-4a453b5631c8 STEP: Creating a pod to test consume secrets May 14 12:49:20.588: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-06f558b2-b0d1-4277-b443-52949ea1446a" in namespace "projected-8697" to be "Succeeded or Failed" May 14 12:49:20.605: INFO: Pod "pod-projected-secrets-06f558b2-b0d1-4277-b443-52949ea1446a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.106887ms May 14 12:49:22.622: INFO: Pod "pod-projected-secrets-06f558b2-b0d1-4277-b443-52949ea1446a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033881513s May 14 12:49:24.639: INFO: Pod "pod-projected-secrets-06f558b2-b0d1-4277-b443-52949ea1446a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051153992s STEP: Saw pod success May 14 12:49:24.639: INFO: Pod "pod-projected-secrets-06f558b2-b0d1-4277-b443-52949ea1446a" satisfied condition "Succeeded or Failed" May 14 12:49:24.655: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod pod-projected-secrets-06f558b2-b0d1-4277-b443-52949ea1446a container projected-secret-volume-test: <nil> STEP: delete the pod May 14 12:49:24.711: INFO: Waiting for pod pod-projected-secrets-06f558b2-b0d1-4277-b443-52949ea1446a to disappear May 14 12:49:24.727: INFO: Pod pod-projected-secrets-06f558b2-b0d1-4277-b443-52949ea1446a no longer exists [AfterEach] [sig-storage] Projected secret /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 12:49:24.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8697" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":92,"skipped":1829,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [BeforeEach] [sig-apps] ReplicationController ... skipping 27 lines ... STEP: deleting ReplicationControllers by collection STEP: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 12:49:28.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1901" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":335,"completed":93,"skipped":1861,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [BeforeEach] [sig-storage] Downward API volume ... skipping 5 lines ... 
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward API volume plugin May 14 12:49:28.492: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a879b463-424c-4873-a19a-3b8243e4ca22" in namespace "downward-api-1991" to be "Succeeded or Failed" May 14 12:49:28.513: INFO: Pod "downwardapi-volume-a879b463-424c-4873-a19a-3b8243e4ca22": Phase="Pending", Reason="", readiness=false. Elapsed: 20.391673ms May 14 12:49:30.530: INFO: Pod "downwardapi-volume-a879b463-424c-4873-a19a-3b8243e4ca22": Phase="Running", Reason="", readiness=true. Elapsed: 2.037806869s May 14 12:49:32.547: INFO: Pod "downwardapi-volume-a879b463-424c-4873-a19a-3b8243e4ca22": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.055111112s STEP: Saw pod success May 14 12:49:32.548: INFO: Pod "downwardapi-volume-a879b463-424c-4873-a19a-3b8243e4ca22" satisfied condition "Succeeded or Failed" May 14 12:49:32.562: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod downwardapi-volume-a879b463-424c-4873-a19a-3b8243e4ca22 container client-container: <nil> STEP: delete the pod May 14 12:49:32.618: INFO: Waiting for pod downwardapi-volume-a879b463-424c-4873-a19a-3b8243e4ca22 to disappear May 14 12:49:32.634: INFO: Pod downwardapi-volume-a879b463-424c-4873-a19a-3b8243e4ca22 no longer exists [AfterEach] [sig-storage] Downward API volume /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 12:49:32.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1991" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":335,"completed":94,"skipped":1867,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [BeforeEach] [sig-network] EndpointSlice ... skipping 10 lines ... 
May 14 12:49:32.836: INFO: Endpoints addresses: [10.0.0.4 10.0.0.5 10.0.0.6] , ports: [6443] May 14 12:49:32.836: INFO: EndpointSlices addresses: [10.0.0.4 10.0.0.5 10.0.0.6] , ports: [6443] [AfterEach] [sig-network] EndpointSlice /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 12:49:32.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-8972" for this suite. •{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":335,"completed":95,"skipped":1894,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [BeforeEach] [sig-storage] Secrets ... skipping 4 lines ... STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating secret with name secret-test-map-746553fc-f535-4da8-abf2-5c0d933c1d21 STEP: Creating a pod to test consume secrets May 14 12:49:33.026: INFO: Waiting up to 5m0s for pod "pod-secrets-b7eeec08-985f-48c5-99c3-60eeef9770a3" in namespace "secrets-7640" to be "Succeeded or Failed" May 14 12:49:33.048: INFO: Pod "pod-secrets-b7eeec08-985f-48c5-99c3-60eeef9770a3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.092196ms May 14 12:49:35.076: INFO: Pod "pod-secrets-b7eeec08-985f-48c5-99c3-60eeef9770a3": Phase="Running", Reason="", readiness=true. Elapsed: 2.049309755s May 14 12:49:37.093: INFO: Pod "pod-secrets-b7eeec08-985f-48c5-99c3-60eeef9770a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066689262s STEP: Saw pod success May 14 12:49:37.093: INFO: Pod "pod-secrets-b7eeec08-985f-48c5-99c3-60eeef9770a3" satisfied condition "Succeeded or Failed" May 14 12:49:37.108: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod pod-secrets-b7eeec08-985f-48c5-99c3-60eeef9770a3 container secret-volume-test: <nil> STEP: delete the pod May 14 12:49:37.159: INFO: Waiting for pod pod-secrets-b7eeec08-985f-48c5-99c3-60eeef9770a3 to disappear May 14 12:49:37.175: INFO: Pod pod-secrets-b7eeec08-985f-48c5-99c3-60eeef9770a3 no longer exists [AfterEach] [sig-storage] Secrets /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 12:49:37.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7640" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":96,"skipped":1908,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [BeforeEach] [sig-storage] Projected downwardAPI ... skipping 5 lines ... 
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward API volume plugin May 14 12:49:37.344: INFO: Waiting up to 5m0s for pod "downwardapi-volume-71d5f4d1-e551-456a-a6ac-201f9c01fcea" in namespace "projected-9952" to be "Succeeded or Failed" May 14 12:49:37.362: INFO: Pod "downwardapi-volume-71d5f4d1-e551-456a-a6ac-201f9c01fcea": Phase="Pending", Reason="", readiness=false. Elapsed: 18.134208ms May 14 12:49:39.379: INFO: Pod "downwardapi-volume-71d5f4d1-e551-456a-a6ac-201f9c01fcea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.034633959s STEP: Saw pod success May 14 12:49:39.379: INFO: Pod "downwardapi-volume-71d5f4d1-e551-456a-a6ac-201f9c01fcea" satisfied condition "Succeeded or Failed" May 14 12:49:39.395: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod downwardapi-volume-71d5f4d1-e551-456a-a6ac-201f9c01fcea container client-container: <nil> STEP: delete the pod May 14 12:49:39.446: INFO: Waiting for pod downwardapi-volume-71d5f4d1-e551-456a-a6ac-201f9c01fcea to disappear May 14 12:49:39.462: INFO: Pod downwardapi-volume-71d5f4d1-e551-456a-a6ac-201f9c01fcea no longer exists [AfterEach] [sig-storage] Projected downwardAPI /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 12:49:39.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9952" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":335,"completed":97,"skipped":1918,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [BeforeEach] [sig-storage] Secrets ... skipping 4 lines ... STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating secret with name secret-test-e8646685-7abd-4dd0-b404-e42c4ee5a886 STEP: Creating a pod to test consume secrets May 14 12:49:39.664: INFO: Waiting up to 5m0s for pod "pod-secrets-7e7461c6-e3e7-4b80-b16c-1ed8851b7141" in namespace "secrets-1775" to be "Succeeded or Failed" May 14 12:49:39.686: INFO: Pod "pod-secrets-7e7461c6-e3e7-4b80-b16c-1ed8851b7141": Phase="Pending", Reason="", readiness=false. Elapsed: 22.095001ms May 14 12:49:41.703: INFO: Pod "pod-secrets-7e7461c6-e3e7-4b80-b16c-1ed8851b7141": Phase="Running", Reason="", readiness=true. Elapsed: 2.038403392s May 14 12:49:43.720: INFO: Pod "pod-secrets-7e7461c6-e3e7-4b80-b16c-1ed8851b7141": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.055460096s STEP: Saw pod success May 14 12:49:43.720: INFO: Pod "pod-secrets-7e7461c6-e3e7-4b80-b16c-1ed8851b7141" satisfied condition "Succeeded or Failed" May 14 12:49:43.735: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod pod-secrets-7e7461c6-e3e7-4b80-b16c-1ed8851b7141 container secret-volume-test: <nil> STEP: delete the pod May 14 12:49:43.790: INFO: Waiting for pod pod-secrets-7e7461c6-e3e7-4b80-b16c-1ed8851b7141 to disappear May 14 12:49:43.807: INFO: Pod pod-secrets-7e7461c6-e3e7-4b80-b16c-1ed8851b7141 no longer exists [AfterEach] [sig-storage] Secrets /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 12:49:43.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1775" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":98,"skipped":1936,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [BeforeEach] [sig-api-machinery] Watchers ... skipping 18 lines ... 
May 14 12:49:44.107: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4081 bae25898-5966-41f5-931a-1692a24497ef 11631 0 2022-05-14 12:49:43 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-05-14 12:49:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 14 12:49:44.107: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4081 bae25898-5966-41f5-931a-1692a24497ef 11632 0 2022-05-14 12:49:43 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-05-14 12:49:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 12:49:44.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4081" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":335,"completed":99,"skipped":1937,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [BeforeEach] [sig-cli] Kubectl client ... skipping 129 lines ... 
May 14 12:50:05.295: INFO: stderr: "" May 14 12:50:05.295: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 12:50:05.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4928" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":335,"completed":100,"skipped":1944,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] ... skipping 24 lines ... May 14 12:50:12.255: INFO: stderr: "" May 14 12:50:12.255: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-4993-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n <empty>\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 12:50:15.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9011" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":335,"completed":101,"skipped":1959,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [BeforeEach] [sig-network] DNS ... skipping 17 lines ... STEP: deleting the pod [AfterEach] [sig-network] DNS /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 12:50:20.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4087" for this suite. •{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":335,"completed":102,"skipped":1965,"failed":0} SSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [BeforeEach] [sig-apps] Deployment ... skipping 26 lines ... 
May 14 12:50:22.914: INFO: Pod "test-recreate-deployment-5b99bd5487-mj9bd" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5b99bd5487-mj9bd test-recreate-deployment-5b99bd5487- deployment-8145 4eb2fb9d-6e66-4a38-95bb-d7d0422c7b33 11959 0 2022-05-14 12:50:22 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:5b99bd5487] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5b99bd5487 6a346a54-2ac9-422f-b7c3-45a0581d554d 0xc006c16fc7 0xc006c16fc8}] [] [{Go-http-client Update v1 2022-05-14 12:50:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {kube-controller-manager Update v1 2022-05-14 12:50:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6a346a54-2ac9-422f-b7c3-45a0581d554d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hpvx5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hpvx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,Pr
ocMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-05t52q-md-0-scnhc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 12:50:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 12:50:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2022-05-14 12:50:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 12:50:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:,StartTime:2022-05-14 12:50:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:50:22.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8145" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":335,"completed":103,"skipped":1972,"failed":0}
SSS
------------------------------
[sig-node] ConfigMap
  should run through a ConfigMap lifecycle [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] ConfigMap
... skipping 12 lines ...
STEP: deleting the ConfigMap by collection with a label selector
STEP: listing all ConfigMaps in test namespace
[AfterEach] [sig-node] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:50:23.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8467" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":335,"completed":104,"skipped":1975,"failed":0}
SSSSSSS
------------------------------
[sig-instrumentation] Events API
  should delete a collection of events [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-instrumentation] Events API
... skipping 13 lines ...
May 14 12:50:23.391: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
[AfterEach] [sig-instrumentation] Events API
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:50:23.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-6395" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":335,"completed":105,"skipped":1982,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Variable Expansion
  should allow substituting values in a volume subpath [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Variable Expansion
... skipping 3 lines ...
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow substituting values in a volume subpath [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test substitution in volume subpath
May 14 12:50:23.635: INFO: Waiting up to 5m0s for pod "var-expansion-b61c00a8-40eb-4754-9c51-e5e53a3dd51b" in namespace "var-expansion-2969" to be "Succeeded or Failed"
May 14 12:50:23.655: INFO: Pod "var-expansion-b61c00a8-40eb-4754-9c51-e5e53a3dd51b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.789234ms
May 14 12:50:25.683: INFO: Pod "var-expansion-b61c00a8-40eb-4754-9c51-e5e53a3dd51b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.04758231s
STEP: Saw pod success
May 14 12:50:25.683: INFO: Pod "var-expansion-b61c00a8-40eb-4754-9c51-e5e53a3dd51b" satisfied condition "Succeeded or Failed"
May 14 12:50:25.704: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod var-expansion-b61c00a8-40eb-4754-9c51-e5e53a3dd51b container dapi-container: <nil>
STEP: delete the pod
May 14 12:50:25.775: INFO: Waiting for pod var-expansion-b61c00a8-40eb-4754-9c51-e5e53a3dd51b to disappear
May 14 12:50:25.790: INFO: Pod var-expansion-b61c00a8-40eb-4754-9c51-e5e53a3dd51b no longer exists
[AfterEach] [sig-node] Variable Expansion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:50:25.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2969" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":335,"completed":106,"skipped":2003,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Secrets
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name secret-test-0e0d123b-b339-4835-a618-cecb6bcb908d
STEP: Creating a pod to test consume secrets
May 14 12:50:25.998: INFO: Waiting up to 5m0s for pod "pod-secrets-fe782005-a607-45a4-8418-0c66ec92cc4b" in namespace "secrets-2369" to be "Succeeded or Failed"
May 14 12:50:26.022: INFO: Pod "pod-secrets-fe782005-a607-45a4-8418-0c66ec92cc4b": Phase="Pending", Reason="", readiness=false. Elapsed: 23.772973ms
May 14 12:50:28.038: INFO: Pod "pod-secrets-fe782005-a607-45a4-8418-0c66ec92cc4b": Phase="Running", Reason="", readiness=true. Elapsed: 2.040536633s
May 14 12:50:30.055: INFO: Pod "pod-secrets-fe782005-a607-45a4-8418-0c66ec92cc4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057053671s
STEP: Saw pod success
May 14 12:50:30.055: INFO: Pod "pod-secrets-fe782005-a607-45a4-8418-0c66ec92cc4b" satisfied condition "Succeeded or Failed"
May 14 12:50:30.071: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod pod-secrets-fe782005-a607-45a4-8418-0c66ec92cc4b container secret-volume-test: <nil>
STEP: delete the pod
May 14 12:50:30.124: INFO: Waiting for pod pod-secrets-fe782005-a607-45a4-8418-0c66ec92cc4b to disappear
May 14 12:50:30.139: INFO: Pod pod-secrets-fe782005-a607-45a4-8418-0c66ec92cc4b no longer exists
[AfterEach] [sig-storage] Secrets
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:50:30.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2369" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":335,"completed":107,"skipped":2028,"failed":0}
S
------------------------------
[sig-storage] Subpath
  Atomic writer volumes
  should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Subpath
... skipping 7 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod pod-subpath-test-configmap-mgfn
STEP: Creating a pod to test atomic-volume-subpath
May 14 12:50:30.348: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-mgfn" in namespace "subpath-4484" to be "Succeeded or Failed"
May 14 12:50:30.376: INFO: Pod "pod-subpath-test-configmap-mgfn": Phase="Pending", Reason="", readiness=false. Elapsed: 27.844146ms
May 14 12:50:32.394: INFO: Pod "pod-subpath-test-configmap-mgfn": Phase="Running", Reason="", readiness=true. Elapsed: 2.046314781s
May 14 12:50:34.412: INFO: Pod "pod-subpath-test-configmap-mgfn": Phase="Running", Reason="", readiness=true. Elapsed: 4.063866759s
May 14 12:50:36.428: INFO: Pod "pod-subpath-test-configmap-mgfn": Phase="Running", Reason="", readiness=true. Elapsed: 6.079499135s
May 14 12:50:38.443: INFO: Pod "pod-subpath-test-configmap-mgfn": Phase="Running", Reason="", readiness=true. Elapsed: 8.094642776s
May 14 12:50:40.459: INFO: Pod "pod-subpath-test-configmap-mgfn": Phase="Running", Reason="", readiness=true. Elapsed: 10.110479397s
May 14 12:50:42.474: INFO: Pod "pod-subpath-test-configmap-mgfn": Phase="Running", Reason="", readiness=true. Elapsed: 12.126433234s
May 14 12:50:44.491: INFO: Pod "pod-subpath-test-configmap-mgfn": Phase="Running", Reason="", readiness=true. Elapsed: 14.143073238s
May 14 12:50:46.508: INFO: Pod "pod-subpath-test-configmap-mgfn": Phase="Running", Reason="", readiness=true. Elapsed: 16.160087263s
May 14 12:50:48.529: INFO: Pod "pod-subpath-test-configmap-mgfn": Phase="Running", Reason="", readiness=true. Elapsed: 18.181384542s
May 14 12:50:50.545: INFO: Pod "pod-subpath-test-configmap-mgfn": Phase="Running", Reason="", readiness=true. Elapsed: 20.197230019s
May 14 12:50:52.561: INFO: Pod "pod-subpath-test-configmap-mgfn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.21281454s
STEP: Saw pod success
May 14 12:50:52.561: INFO: Pod "pod-subpath-test-configmap-mgfn" satisfied condition "Succeeded or Failed"
May 14 12:50:52.576: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod pod-subpath-test-configmap-mgfn container test-container-subpath-configmap-mgfn: <nil>
STEP: delete the pod
May 14 12:50:52.646: INFO: Waiting for pod pod-subpath-test-configmap-mgfn to disappear
May 14 12:50:52.660: INFO: Pod pod-subpath-test-configmap-mgfn no longer exists
STEP: Deleting pod pod-subpath-test-configmap-mgfn
May 14 12:50:52.660: INFO: Deleting pod "pod-subpath-test-configmap-mgfn" in namespace "subpath-4484"
[AfterEach] [sig-storage] Subpath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:50:52.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4484" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance]","total":335,"completed":108,"skipped":2029,"failed":0}
SS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected secret
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating projection with secret that has name projected-secret-test-map-deadd490-8faf-42e8-8007-ca2f8283038b
STEP: Creating a pod to test consume secrets
May 14 12:50:52.876: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5f923bb7-2a91-4dae-a386-49d9b21fbe91" in namespace "projected-6734" to be "Succeeded or Failed"
May 14 12:50:52.896: INFO: Pod "pod-projected-secrets-5f923bb7-2a91-4dae-a386-49d9b21fbe91": Phase="Pending", Reason="", readiness=false. Elapsed: 20.34365ms
May 14 12:50:54.914: INFO: Pod "pod-projected-secrets-5f923bb7-2a91-4dae-a386-49d9b21fbe91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.037506702s
STEP: Saw pod success
May 14 12:50:54.914: INFO: Pod "pod-projected-secrets-5f923bb7-2a91-4dae-a386-49d9b21fbe91" satisfied condition "Succeeded or Failed"
May 14 12:50:54.928: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod pod-projected-secrets-5f923bb7-2a91-4dae-a386-49d9b21fbe91 container projected-secret-volume-test: <nil>
STEP: delete the pod
May 14 12:50:54.987: INFO: Waiting for pod pod-projected-secrets-5f923bb7-2a91-4dae-a386-49d9b21fbe91 to disappear
May 14 12:50:55.002: INFO: Pod pod-projected-secrets-5f923bb7-2a91-4dae-a386-49d9b21fbe91 no longer exists
[AfterEach] [sig-storage] Projected secret
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:50:55.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6734" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":335,"completed":109,"skipped":2031,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-node] PreStop
  should call prestop when killing a pod [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] PreStop
... skipping 26 lines ...
}
STEP: Deleting the server pod
[AfterEach] [sig-node] PreStop
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:51:04.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-3631" for this suite.
•{"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":335,"completed":110,"skipped":2044,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 35 lines ...
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:51:05.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8673" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":335,"completed":111,"skipped":2067,"failed":0}
SSSS
------------------------------
[sig-network] Services
  should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] Services
... skipping 51 lines ...
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:51:17.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3767" for this suite.
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":335,"completed":112,"skipped":2071,"failed":0}
SSS
------------------------------
[sig-node] Probing container
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Probing container
... skipping 18 lines ...
May 14 12:53:51.243: INFO: Restart count of pod container-probe-4696/liveness-4089719f-70bc-41b8-9f0b-1e8ac32d01b1 is now 5 (2m31.275839059s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:53:51.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4696" for this suite.
•{"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":335,"completed":113,"skipped":2074,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts
  ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 3 lines ...
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
May 14 12:53:51.493: INFO: created pod
May 14 12:53:51.493: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-4508" to be "Succeeded or Failed"
May 14 12:53:51.512: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 18.387674ms
May 14 12:53:53.529: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 2.035059686s
May 14 12:53:55.545: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051930882s
STEP: Saw pod success
May 14 12:53:55.545: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed"
May 14 12:54:25.546: INFO: polling logs
May 14 12:54:25.573: INFO: Pod logs:
2022/05/14 12:53:52 OK: Got token
2022/05/14 12:53:52 validating with in-cluster discovery
2022/05/14 12:53:52 OK: got issuer https://kubernetes.default.svc.cluster.local
2022/05/14 12:53:52 Full, not-validated claims:
... skipping 6 lines ...
May 14 12:54:25.573: INFO: completed pod
[AfterEach] [sig-auth] ServiceAccounts
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:54:25.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4508" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":335,"completed":114,"skipped":2168,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] IngressClass API
  should support creating IngressClass API operations [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] IngressClass API
... skipping 22 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] IngressClass API
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:54:26.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-6392" for this suite.
•{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":335,"completed":115,"skipped":2211,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  removes definition from spec when one version gets changed to not be served [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 11 lines ...
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:54:47.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1283" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":335,"completed":116,"skipped":2216,"failed":0}
S
------------------------------
[sig-apps] Deployment
  should validate Deployment Status endpoints [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] Deployment
... skipping 63 lines ...
May 14 12:54:50.100: INFO: Pod "test-deployment-dwvnl-764bc7c4b7-sw6pd" is available: &Pod{ObjectMeta:{test-deployment-dwvnl-764bc7c4b7-sw6pd test-deployment-dwvnl-764bc7c4b7- deployment-6758 9576a778-d4d3-48aa-8efc-61b93afbe886 13248 0 2022-05-14 12:54:47 +0000 UTC <nil> <nil> map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[cni.projectcalico.org/containerID:fbf7793cf9502d496635bfc769a23d9866dd75d5cf06efc8639a60598ac000b9 cni.projectcalico.org/podIP:192.168.92.76/32 cni.projectcalico.org/podIPs:192.168.92.76/32] [{apps/v1 ReplicaSet test-deployment-dwvnl-764bc7c4b7 bfb01c3a-2edd-436c-8e0f-3bee358273e1 0xc005d143f0 0xc005d143f1}] [] [{kube-controller-manager Update v1 2022-05-14 12:54:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bfb01c3a-2edd-436c-8e0f-3bee358273e1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2022-05-14 12:54:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {Go-http-client Update v1 2022-05-14 12:54:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.92.76\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-l9jvn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceL
ist{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l9jvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-05t52q-md-0-dxhn8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:n
il,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 12:54:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 12:54:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 12:54:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 12:54:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.4,PodIP:192.168.92.76,StartTime:2022-05-14 12:54:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-14 12:54:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://3a76147cc861b01f9641647652f836b2f24902bda0e7dcbfd11331b79852a857,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.92.76,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 12:54:50.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "deployment-6758" for this suite. 
•{"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":335,"completed":117,"skipped":2217,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
pod should support shared volumes between containers [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 13 lines ...
May 14 12:54:52.319: INFO: ExecWithOptions: execute(POST https://capz-05t52q-8c0f6c09.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/emptydir-6309/pods/pod-sharedvolume-3a3b94df-fdad-4897-a57a-5d410d6f16f5/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fusr%2Fshare%2Fvolumeshare%2Fshareddata.txt&container=busybox-main-container&container=busybox-main-container&stderr=true&stdout=true %!s(MISSING))
May 14 12:54:52.549: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:54:52.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6309" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":335,"completed":118,"skipped":2233,"failed":0}
SSSS
------------------------------
[sig-node] Secrets
should patch a secret [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Secrets
... skipping 11 lines ...
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-node] Secrets
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:54:52.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8901" for this suite.
•{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":335,"completed":119,"skipped":2237,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client
Kubectl api-versions
should check if v1 is in available api versions [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-cli] Kubectl client
... skipping 12 lines ...
May 14 12:54:53.232: INFO: stderr: ""
May 14 12:54:53.232: INFO: stdout: "admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nautoscaling/v1\nautoscaling/v2\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncoordination.k8s.io/v1\ncrd.projectcalico.org/v1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta2\nnetworking.k8s.io/v1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nscheduling.k8s.io/v1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:54:53.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1775" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":335,"completed":120,"skipped":2249,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should provide container's cpu request [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Downward API volume
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
May 14 12:54:53.418: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ac76bc4d-c648-499f-87ef-191f58dd538b" in namespace "downward-api-5060" to be "Succeeded or Failed"
May 14 12:54:53.435: INFO: Pod "downwardapi-volume-ac76bc4d-c648-499f-87ef-191f58dd538b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.692537ms
May 14 12:54:55.454: INFO: Pod "downwardapi-volume-ac76bc4d-c648-499f-87ef-191f58dd538b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.036424597s
STEP: Saw pod success
May 14 12:54:55.454: INFO: Pod "downwardapi-volume-ac76bc4d-c648-499f-87ef-191f58dd538b" satisfied condition "Succeeded or Failed"
May 14 12:54:55.472: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod downwardapi-volume-ac76bc4d-c648-499f-87ef-191f58dd538b container client-container: <nil>
STEP: delete the pod
May 14 12:54:55.561: INFO: Waiting for pod downwardapi-volume-ac76bc4d-c648-499f-87ef-191f58dd538b to disappear
May 14 12:54:55.578: INFO: Pod downwardapi-volume-ac76bc4d-c648-499f-87ef-191f58dd538b no longer exists
[AfterEach] [sig-storage] Downward API volume
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:54:55.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5060" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":335,"completed":121,"skipped":2282,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods
should get a host IP [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Pods
... skipping 12 lines ...
May 14 12:54:57.816: INFO: The status of Pod pod-hostip-9e5a8bb2-eae6-4914-b5d9-71e80b334e0a is Running (Ready = true)
May 14 12:54:57.851: INFO: Pod pod-hostip-9e5a8bb2-eae6-4914-b5d9-71e80b334e0a has hostIP: 10.1.0.4
[AfterEach] [sig-node] Pods
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:54:57.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5297" for this suite.
•{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":335,"completed":122,"skipped":2349,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] DisruptionController
should block an eviction until the PDB is updated to allow it [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] DisruptionController
... skipping 29 lines ...
STEP: Trying to evict the same pod we tried earlier which should now be evictable
STEP: Waiting for all pods to be running
[AfterEach] [sig-apps] DisruptionController
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:55:04.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-835" for this suite.
•{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":335,"completed":123,"skipped":2360,"failed":0}
SSS
------------------------------
[sig-network] EndpointSlice
should support creating EndpointSlice API operations [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] EndpointSlice
... skipping 25 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] EndpointSlice
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:55:05.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-9959" for this suite.
•{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":335,"completed":124,"skipped":2363,"failed":0}
------------------------------
[sig-storage] Subpath
Atomic writer volumes
should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Subpath
... skipping 7 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod pod-subpath-test-secret-hb9z
STEP: Creating a pod to test atomic-volume-subpath
May 14 12:55:05.413: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-hb9z" in namespace "subpath-6181" to be "Succeeded or Failed"
May 14 12:55:05.441: INFO: Pod "pod-subpath-test-secret-hb9z": Phase="Pending", Reason="", readiness=false. Elapsed: 27.758789ms
May 14 12:55:07.459: INFO: Pod "pod-subpath-test-secret-hb9z": Phase="Running", Reason="", readiness=true. Elapsed: 2.045529892s
May 14 12:55:09.480: INFO: Pod "pod-subpath-test-secret-hb9z": Phase="Running", Reason="", readiness=true. Elapsed: 4.066538681s
May 14 12:55:11.505: INFO: Pod "pod-subpath-test-secret-hb9z": Phase="Running", Reason="", readiness=true. Elapsed: 6.091753637s
May 14 12:55:13.525: INFO: Pod "pod-subpath-test-secret-hb9z": Phase="Running", Reason="", readiness=true. Elapsed: 8.111568255s
May 14 12:55:15.542: INFO: Pod "pod-subpath-test-secret-hb9z": Phase="Running", Reason="", readiness=true. Elapsed: 10.128757383s
... skipping 2 lines ...
May 14 12:55:21.598: INFO: Pod "pod-subpath-test-secret-hb9z": Phase="Running", Reason="", readiness=true. Elapsed: 16.184842304s
May 14 12:55:23.615: INFO: Pod "pod-subpath-test-secret-hb9z": Phase="Running", Reason="", readiness=true. Elapsed: 18.202405226s
May 14 12:55:25.633: INFO: Pod "pod-subpath-test-secret-hb9z": Phase="Running", Reason="", readiness=true. Elapsed: 20.220207569s
May 14 12:55:27.652: INFO: Pod "pod-subpath-test-secret-hb9z": Phase="Running", Reason="", readiness=true. Elapsed: 22.239200246s
May 14 12:55:29.672: INFO: Pod "pod-subpath-test-secret-hb9z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.259342006s
STEP: Saw pod success
May 14 12:55:29.672: INFO: Pod "pod-subpath-test-secret-hb9z" satisfied condition "Succeeded or Failed"
May 14 12:55:29.689: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod pod-subpath-test-secret-hb9z container test-container-subpath-secret-hb9z: <nil>
STEP: delete the pod
May 14 12:55:29.765: INFO: Waiting for pod pod-subpath-test-secret-hb9z to disappear
May 14 12:55:29.781: INFO: Pod pod-subpath-test-secret-hb9z no longer exists
STEP: Deleting pod pod-subpath-test-secret-hb9z
May 14 12:55:29.781: INFO: Deleting pod "pod-subpath-test-secret-hb9z" in namespace "subpath-6181"
[AfterEach] [sig-storage] Subpath
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:55:29.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6181" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance]","total":335,"completed":125,"skipped":2363,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts
should mount an API token into pods [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 13 lines ...
STEP: reading a file in the container
May 14 12:55:33.387: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl exec --namespace=svcaccounts-5565 pod-service-account-9282646b-0560-4e69-8a61-6c8ca1497410 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:55:33.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5565" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":335,"completed":126,"skipped":2377,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Kubelet
when scheduling a busybox command in a pod
should print the output to logs [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Kubelet
... skipping 10 lines ...
May 14 12:55:34.040: INFO: The status of Pod busybox-scheduling-64c86d04-0f26-4011-bf7d-e0178a871a5f is Pending, waiting for it to be Running (with Ready = true)
May 14 12:55:36.061: INFO: The status of Pod busybox-scheduling-64c86d04-0f26-4011-bf7d-e0178a871a5f is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:55:36.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8065" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":335,"completed":127,"skipped":2393,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client
Kubectl diff
should check if kubectl diff finds a difference for Deployments [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-cli] Kubectl client
... skipping 18 lines ...
May 14 12:55:38.284: INFO: stderr: ""
May 14 12:55:38.284: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:55:38.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-115" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":335,"completed":128,"skipped":2403,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
optional updates should be reflected in volume [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Secrets
... skipping 16 lines ...
STEP: Creating secret with name s-test-opt-create-4c8cab15-d888-4521-ad63-f3bcfe484df9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:56:49.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1731" for this suite.
•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":335,"completed":129,"skipped":2426,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should provide container's memory request [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
May 14 12:56:49.896: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7d12169e-24c0-438d-90d5-8455ea606d21" in namespace "projected-1136" to be "Succeeded or Failed"
May 14 12:56:49.915: INFO: Pod "downwardapi-volume-7d12169e-24c0-438d-90d5-8455ea606d21": Phase="Pending", Reason="", readiness=false. Elapsed: 18.975193ms
May 14 12:56:51.936: INFO: Pod "downwardapi-volume-7d12169e-24c0-438d-90d5-8455ea606d21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.039859249s
STEP: Saw pod success
May 14 12:56:51.936: INFO: Pod "downwardapi-volume-7d12169e-24c0-438d-90d5-8455ea606d21" satisfied condition "Succeeded or Failed"
May 14 12:56:51.955: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod downwardapi-volume-7d12169e-24c0-438d-90d5-8455ea606d21 container client-container: <nil>
STEP: delete the pod
May 14 12:56:52.052: INFO: Waiting for pod downwardapi-volume-7d12169e-24c0-438d-90d5-8455ea606d21 to disappear
May 14 12:56:52.069: INFO: Pod downwardapi-volume-7d12169e-24c0-438d-90d5-8455ea606d21 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:56:52.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1136" for this suite.
[32m•[0m{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":335,"completed":130,"skipped":2434,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-apps] Deployment[0m [1mDeployment should have a working scale subresource [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-apps] Deployment ... skipping 25 lines ... May 14 12:56:54.565: INFO: Pod "test-new-deployment-5d9fdcc779-krcqh" is not available: &Pod{ObjectMeta:{test-new-deployment-5d9fdcc779-krcqh test-new-deployment-5d9fdcc779- deployment-8509 235aa040-a08b-4e3d-97ed-2d66fc0b87ab 14139 0 2022-05-14 12:56:54 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet test-new-deployment-5d9fdcc779 31591b97-a8fb-42ed-8730-3946b86e4e78 0xc0026c09a0 0xc0026c09a1}] [] [{Go-http-client Update v1 2022-05-14 12:56:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {kube-controller-manager Update v1 2022-05-14 12:56:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31591b97-a8fb-42ed-8730-3946b86e4e78\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8wdgl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limit
s:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8wdgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-05t52q-md-0-scnhc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQ
DN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 12:56:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 12:56:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 12:56:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 12:56:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:,StartTime:2022-05-14 12:56:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 12:56:54.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "deployment-8509" for this suite. 
•{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":335,"completed":131,"skipped":2442,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should update annotations on modification [NodeConformance] [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Downward API volume
... skipping 12 lines ...
May 14 12:56:56.827: INFO: The status of Pod annotationupdate97436859-c625-4329-97fa-5ba47c80fc42 is Running (Ready = true)
May 14 12:56:57.433: INFO: Successfully updated pod "annotationupdate97436859-c625-4329-97fa-5ba47c80fc42"
[AfterEach] [sig-storage] Downward API volume
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:56:59.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1716" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":335,"completed":132,"skipped":2463,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency
should not be very high [Conformance]
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] Service endpoints latency
... skipping 418 lines ...
May 14 12:57:10.505: INFO: 99 %ile: 940.957141ms
May 14 12:57:10.505: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:57:10.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-4499" for this suite.
•{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":335,"completed":133,"skipped":2479,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath
  should support subPath [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
[BeforeEach] [sig-storage] HostPath
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support subPath [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
STEP: Creating a pod to test hostPath subPath
May 14 12:57:10.694: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3205" to be "Succeeded or Failed"
May 14 12:57:10.716: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 22.6748ms
May 14 12:57:12.736: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 2.042669445s
STEP: Saw pod success
May 14 12:57:12.737: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
May 14 12:57:12.755: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
May 14 12:57:12.821: INFO: Waiting for pod pod-host-path-test to disappear
May 14 12:57:12.838: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:57:12.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-3205" for this suite.
•{"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":335,"completed":134,"skipped":2502,"failed":0}
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected configMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-map-a0e35b2f-719b-45d1-959c-077d918ac433
STEP: Creating a pod to test consume configMaps
May 14 12:57:13.044: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-432bda03-57ac-47b0-85e2-77db5c4b7417" in namespace "projected-6349" to be "Succeeded or Failed"
May 14 12:57:13.069: INFO: Pod "pod-projected-configmaps-432bda03-57ac-47b0-85e2-77db5c4b7417": Phase="Pending", Reason="", readiness=false. Elapsed: 25.00588ms
May 14 12:57:15.087: INFO: Pod "pod-projected-configmaps-432bda03-57ac-47b0-85e2-77db5c4b7417": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.04295951s
STEP: Saw pod success
May 14 12:57:15.087: INFO: Pod "pod-projected-configmaps-432bda03-57ac-47b0-85e2-77db5c4b7417" satisfied condition "Succeeded or Failed"
May 14 12:57:15.103: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod pod-projected-configmaps-432bda03-57ac-47b0-85e2-77db5c4b7417 container agnhost-container: <nil>
STEP: delete the pod
May 14 12:57:15.221: INFO: Waiting for pod pod-projected-configmaps-432bda03-57ac-47b0-85e2-77db5c4b7417 to disappear
May 14 12:57:15.237: INFO: Pod pod-projected-configmaps-432bda03-57ac-47b0-85e2-77db5c4b7417 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:57:15.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6349" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":135,"skipped":2502,"failed":0}
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected configMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-map-1403395e-acd4-4ff4-8838-0168df6ae7b2
STEP: Creating a pod to test consume configMaps
May 14 12:57:15.466: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4fb2b8e0-2d07-4620-ae0b-0ab7ca5cee3d" in namespace "projected-3198" to be "Succeeded or Failed"
May 14 12:57:15.493: INFO: Pod "pod-projected-configmaps-4fb2b8e0-2d07-4620-ae0b-0ab7ca5cee3d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.513128ms
May 14 12:57:17.512: INFO: Pod "pod-projected-configmaps-4fb2b8e0-2d07-4620-ae0b-0ab7ca5cee3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046092109s
May 14 12:57:19.531: INFO: Pod "pod-projected-configmaps-4fb2b8e0-2d07-4620-ae0b-0ab7ca5cee3d": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.065084374s
STEP: Saw pod success
May 14 12:57:19.531: INFO: Pod "pod-projected-configmaps-4fb2b8e0-2d07-4620-ae0b-0ab7ca5cee3d" satisfied condition "Succeeded or Failed"
May 14 12:57:19.548: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod pod-projected-configmaps-4fb2b8e0-2d07-4620-ae0b-0ab7ca5cee3d container agnhost-container: <nil>
STEP: delete the pod
May 14 12:57:19.640: INFO: Waiting for pod pod-projected-configmaps-4fb2b8e0-2d07-4620-ae0b-0ab7ca5cee3d to disappear
May 14 12:57:19.657: INFO: Pod pod-projected-configmaps-4fb2b8e0-2d07-4620-ae0b-0ab7ca5cee3d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:57:19.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3198" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":335,"completed":136,"skipped":2502,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not conflict [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 11 lines ...
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:57:22.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3331" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":335,"completed":137,"skipped":2511,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Container Runtime
  blackbox test
  on terminated container
  should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Container Runtime
... skipping 3 lines ...
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 14 12:57:25.808: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:57:25.902: INFO: Waiting
up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3694" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":335,"completed":138,"skipped":2552,"failed":0}
S
------------------------------
[sig-network] DNS
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] DNS
... skipping 17 lines ...
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:57:28.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3548" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":335,"completed":139,"skipped":2553,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] ConfigMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-volume-map-5b8a0c5d-4b84-43e7-80c4-ed99b6b67ebf
STEP: Creating a pod to test consume configMaps
May 14 12:57:28.635: INFO: Waiting up to 5m0s for pod "pod-configmaps-c3d090ba-657b-4f46-b50c-ce4637f8d621" in namespace "configmap-1727" to be "Succeeded or Failed"
May 14 12:57:28.670: INFO: Pod "pod-configmaps-c3d090ba-657b-4f46-b50c-ce4637f8d621": Phase="Pending", Reason="", readiness=false. Elapsed: 34.54215ms
May 14 12:57:30.688: INFO: Pod "pod-configmaps-c3d090ba-657b-4f46-b50c-ce4637f8d621": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.052665929s
STEP: Saw pod success
May 14 12:57:30.688: INFO: Pod "pod-configmaps-c3d090ba-657b-4f46-b50c-ce4637f8d621" satisfied condition "Succeeded or Failed"
May 14 12:57:30.705: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod pod-configmaps-c3d090ba-657b-4f46-b50c-ce4637f8d621 container agnhost-container: <nil>
STEP: delete the pod
May 14 12:57:30.764: INFO: Waiting for pod pod-configmaps-c3d090ba-657b-4f46-b50c-ce4637f8d621 to disappear
May 14 12:57:30.780: INFO: Pod pod-configmaps-c3d090ba-657b-4f46-b50c-ce4637f8d621 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:57:30.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1727" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":335,"completed":140,"skipped":2570,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Docker Containers
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Docker Containers
... skipping 3 lines ...
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test override all
May 14 12:57:30.983: INFO: Waiting up to 5m0s for pod "client-containers-030313d0-40b4-4f51-9643-0570c46e4d86" in namespace "containers-3524" to be "Succeeded or Failed"
May 14 12:57:31.004: INFO: Pod "client-containers-030313d0-40b4-4f51-9643-0570c46e4d86": Phase="Pending", Reason="", readiness=false. Elapsed: 20.404653ms
May 14 12:57:33.021: INFO: Pod "client-containers-030313d0-40b4-4f51-9643-0570c46e4d86": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 2.037542786s
STEP: Saw pod success
May 14 12:57:33.021: INFO: Pod "client-containers-030313d0-40b4-4f51-9643-0570c46e4d86" satisfied condition "Succeeded or Failed"
May 14 12:57:33.038: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod client-containers-030313d0-40b4-4f51-9643-0570c46e4d86 container agnhost-container: <nil>
STEP: delete the pod
May 14 12:57:33.135: INFO: Waiting for pod client-containers-030313d0-40b4-4f51-9643-0570c46e4d86 to disappear
May 14 12:57:33.155: INFO: Pod client-containers-030313d0-40b4-4f51-9643-0570c46e4d86 no longer exists
[AfterEach] [sig-node] Docker Containers
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:57:33.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3524" for this suite.
•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":335,"completed":141,"skipped":2618,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context
  should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
May 14 12:57:33.348: INFO: Waiting up to 5m0s for pod "security-context-3c07ee12-6d17-4469-b61d-9fea9e4d0efb" in namespace "security-context-2526" to be "Succeeded or Failed"
May 14 12:57:33.369: INFO: Pod "security-context-3c07ee12-6d17-4469-b61d-9fea9e4d0efb": Phase="Pending", Reason="", readiness=false. Elapsed: 20.550364ms
May 14 12:57:35.387: INFO: Pod "security-context-3c07ee12-6d17-4469-b61d-9fea9e4d0efb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.03890315s
STEP: Saw pod success
May 14 12:57:35.387: INFO: Pod "security-context-3c07ee12-6d17-4469-b61d-9fea9e4d0efb" satisfied condition "Succeeded or Failed"
May 14 12:57:35.406: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod security-context-3c07ee12-6d17-4469-b61d-9fea9e4d0efb container test-container: <nil>
STEP: delete the pod
May 14 12:57:35.488: INFO: Waiting for pod security-context-3c07ee12-6d17-4469-b61d-9fea9e4d0efb to disappear
May 14 12:57:35.504: INFO: Pod security-context-3c07ee12-6d17-4469-b61d-9fea9e4d0efb no longer exists
[AfterEach] [sig-node] Security Context
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:57:35.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-2526" for this suite.
•{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":335,"completed":142,"skipped":2641,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 24 lines ...
May 14 12:57:42.678: INFO: stderr: ""
May 14 12:57:42.679: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-8961-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata.
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t<Object>\n Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:57:47.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1151" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":335,"completed":143,"skipped":2659,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory limit [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
May 14 12:57:47.305: INFO: Waiting up to 5m0s for pod "downwardapi-volume-90e5cbc8-7a5e-4928-bd47-f5c397dffa7f" in namespace "projected-9753" to be "Succeeded or Failed"
May 14 12:57:47.327: INFO: Pod "downwardapi-volume-90e5cbc8-7a5e-4928-bd47-f5c397dffa7f": Phase="Pending", Reason="", readiness=false.
Elapsed: 21.863773ms
May 14 12:57:49.346: INFO: Pod "downwardapi-volume-90e5cbc8-7a5e-4928-bd47-f5c397dffa7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.041139432s
STEP: Saw pod success
May 14 12:57:49.346: INFO: Pod "downwardapi-volume-90e5cbc8-7a5e-4928-bd47-f5c397dffa7f" satisfied condition "Succeeded or Failed"
May 14 12:57:49.364: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod downwardapi-volume-90e5cbc8-7a5e-4928-bd47-f5c397dffa7f container client-container: <nil>
STEP: delete the pod
May 14 12:57:49.443: INFO: Waiting for pod downwardapi-volume-90e5cbc8-7a5e-4928-bd47-f5c397dffa7f to disappear
May 14 12:57:49.460: INFO: Pod downwardapi-volume-90e5cbc8-7a5e-4928-bd47-f5c397dffa7f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:57:49.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9753" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":335,"completed":144,"skipped":2664,"failed":0}
SS
------------------------------
[sig-network] Proxy
  version v1
  A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] version v1
... skipping 41 lines ...
May 14 12:57:54.044: INFO: Starting http.Client for https://capz-05t52q-8c0f6c09.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/proxy-4731/services/test-service/proxy/some/path/with/PUT
May 14 12:57:54.066: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT
[AfterEach] version v1
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:57:54.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4731" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":335,"completed":145,"skipped":2666,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] Services
... skipping 45 lines ...
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:58:04.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-169" for this suite.
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":335,"completed":146,"skipped":2699,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy
  version v1
  should proxy through a service and a pod [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] version v1
... skipping 336 lines ...
May 14 12:58:08.072: INFO: Deleting ReplicationController proxy-service-v7lnr took: 35.533821ms
May 14 12:58:08.173: INFO: Terminating ReplicationController proxy-service-v7lnr pods took: 101.158825ms
[AfterEach] version v1
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:58:10.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1162" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":335,"completed":147,"skipped":2737,"failed":0}
SSSSSSSS
------------------------------
[sig-node] ConfigMap
  should be consumable via environment variable [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] ConfigMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap configmap-8154/configmap-test-d1ddce90-d9be-4def-a163-d639d4690a79
STEP: Creating a pod to test consume configMaps
May 14 12:58:11.088: INFO: Waiting up to 5m0s for pod "pod-configmaps-702f928e-72b9-440f-8dcd-0a6bca0bb5f4" in namespace "configmap-8154" to be "Succeeded or Failed"
May 14 12:58:11.111: INFO: Pod "pod-configmaps-702f928e-72b9-440f-8dcd-0a6bca0bb5f4": Phase="Pending", Reason="", readiness=false. Elapsed: 22.615047ms
May 14 12:58:13.149: INFO: Pod "pod-configmaps-702f928e-72b9-440f-8dcd-0a6bca0bb5f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060223865s
STEP: Saw pod success
May 14 12:58:13.149: INFO: Pod "pod-configmaps-702f928e-72b9-440f-8dcd-0a6bca0bb5f4" satisfied condition "Succeeded or Failed"
May 14 12:58:13.167: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod pod-configmaps-702f928e-72b9-440f-8dcd-0a6bca0bb5f4 container env-test: <nil>
STEP: delete the pod
May 14 12:58:13.241: INFO: Waiting for pod pod-configmaps-702f928e-72b9-440f-8dcd-0a6bca0bb5f4 to disappear
May 14 12:58:13.258: INFO: Pod pod-configmaps-702f928e-72b9-440f-8dcd-0a6bca0bb5f4 no longer exists
[AfterEach] [sig-node] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:58:13.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8154" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":335,"completed":148,"skipped":2745,"failed":0}
------------------------------
[sig-instrumentation] Events API
  should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-instrumentation] Events API
... skipping 21 lines ...
STEP: listing events in all namespaces
STEP: listing events in test namespace
[AfterEach] [sig-instrumentation] Events API
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:58:13.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-5334" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":335,"completed":149,"skipped":2745,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap
  should be consumable via the environment [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] ConfigMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap configmap-7859/configmap-test-d47ab7ca-d832-4c70-95c8-1a9fae5dc0f2
STEP: Creating a pod to test consume configMaps
May 14 12:58:13.929: INFO: Waiting up to 5m0s for pod "pod-configmaps-42a3858d-8da1-4fb6-bb93-fe553368d6a1" in namespace "configmap-7859" to be "Succeeded or Failed"
May 14 12:58:13.952: INFO: Pod "pod-configmaps-42a3858d-8da1-4fb6-bb93-fe553368d6a1": Phase="Pending", Reason="", readiness=false. Elapsed: 22.765198ms
May 14 12:58:15.971: INFO: Pod "pod-configmaps-42a3858d-8da1-4fb6-bb93-fe553368d6a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.042021359s
STEP: Saw pod success
May 14 12:58:15.971: INFO: Pod "pod-configmaps-42a3858d-8da1-4fb6-bb93-fe553368d6a1" satisfied condition "Succeeded or Failed"
May 14 12:58:15.988: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod pod-configmaps-42a3858d-8da1-4fb6-bb93-fe553368d6a1 container env-test: <nil>
STEP: delete the pod
May 14 12:58:16.063: INFO: Waiting for pod pod-configmaps-42a3858d-8da1-4fb6-bb93-fe553368d6a1 to disappear
May 14 12:58:16.082: INFO: Pod pod-configmaps-42a3858d-8da1-4fb6-bb93-fe553368d6a1 no longer exists
[AfterEach] [sig-node] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:58:16.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7859" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":335,"completed":150,"skipped":2785,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Docker Containers
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Docker Containers
... skipping 3 lines ...
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test override command
May 14 12:58:16.282: INFO: Waiting up to 5m0s for pod "client-containers-57aa58d6-e1ad-46fb-b82f-50c9eea8f18f" in namespace "containers-7659" to be "Succeeded or Failed"
May 14 12:58:16.307: INFO: Pod "client-containers-57aa58d6-e1ad-46fb-b82f-50c9eea8f18f": Phase="Pending", Reason="", readiness=false. Elapsed: 25.57048ms
May 14 12:58:18.326: INFO: Pod "client-containers-57aa58d6-e1ad-46fb-b82f-50c9eea8f18f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.044723812s
STEP: Saw pod success
May 14 12:58:18.326: INFO: Pod "client-containers-57aa58d6-e1ad-46fb-b82f-50c9eea8f18f" satisfied condition "Succeeded or Failed"
May 14 12:58:18.343: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod client-containers-57aa58d6-e1ad-46fb-b82f-50c9eea8f18f container agnhost-container: <nil>
STEP: delete the pod
May 14 12:58:18.411: INFO: Waiting for pod client-containers-57aa58d6-e1ad-46fb-b82f-50c9eea8f18f to disappear
May 14 12:58:18.428: INFO: Pod client-containers-57aa58d6-e1ad-46fb-b82f-50c9eea8f18f no longer exists
[AfterEach] [sig-node] Docker Containers
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:58:18.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7659" for this suite.
•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":335,"completed":151,"skipped":2815,"failed":0}
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  CustomResourceDefinition Watch
  watch on custom resource definition objects [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
... skipping 19 lines ...
STEP: Deleting second CR
May 14 12:59:11.476: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-05-14T12:58:31Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-05-14T12:58:51Z]] name:name2 resourceVersion:17051 uid:6719b360-f22b-4788-a339-cf8e69f37761] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:59:22.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-6002" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":335,"completed":152,"skipped":2815,"failed":0}
SSSS
------------------------------
[sig-node] Kubelet
  when scheduling a read only busybox container
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Kubelet
... skipping 11 lines ...
May 14 12:59:24.332: INFO: The status of Pod busybox-readonly-fs00688370-beb3-4f92-8f38-ea4f5f97f2a7 is Pending, waiting for it to be Running (with Ready = true)
May 14 12:59:26.331: INFO: The status of Pod busybox-readonly-fs00688370-beb3-4f92-8f38-ea4f5f97f2a7 is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:59:26.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2132" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":153,"skipped":2819,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
May 14 12:59:26.563: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4c604af5-b946-4a18-bf6c-ccabaa97d80c" in namespace "projected-9424" to be "Succeeded or Failed"
May 14 12:59:26.584: INFO: Pod "downwardapi-volume-4c604af5-b946-4a18-bf6c-ccabaa97d80c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.50561ms
May 14 12:59:28.604: INFO: Pod "downwardapi-volume-4c604af5-b946-4a18-bf6c-ccabaa97d80c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.040724761s
STEP: Saw pod success
May 14 12:59:28.604: INFO: Pod "downwardapi-volume-4c604af5-b946-4a18-bf6c-ccabaa97d80c" satisfied condition "Succeeded or Failed"
May 14 12:59:28.621: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod downwardapi-volume-4c604af5-b946-4a18-bf6c-ccabaa97d80c container client-container: <nil>
STEP: delete the pod
May 14 12:59:28.685: INFO: Waiting for pod downwardapi-volume-4c604af5-b946-4a18-bf6c-ccabaa97d80c to disappear
May 14 12:59:28.703: INFO: Pod downwardapi-volume-4c604af5-b946-4a18-bf6c-ccabaa97d80c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:59:28.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9424" for this suite.
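The projected downward-API test above mounts a volume exposing `limits.memory` through a `resourceFieldRef` while deliberately setting no memory limit on the container; in that case the projected value falls back to the node's allocatable memory, which is what the test asserts. A sketch of that manifest shape (names and image are illustrative assumptions):

```python
# Sketch of the projected downward-API volume behind the
# "node allocatable (memory) as default memory limit" test.
# Names, paths, and the image are illustrative assumptions; the key point is
# resourceFieldRef with no resources.limits.memory on the container.

def downward_memory_pod(name: str) -> dict:
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "client-container",
                "image": "registry.k8s.io/e2e-test-images/agnhost:2.36",  # assumed image
                "command": ["sh", "-c", "cat /etc/podinfo/memory_limit"],
                # Intentionally no resources.limits.memory: the projected
                # value then defaults to the node's allocatable memory.
                "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
            }],
            "volumes": [{
                "name": "podinfo",
                "projected": {
                    "sources": [{
                        "downwardAPI": {
                            "items": [{
                                "path": "memory_limit",
                                "resourceFieldRef": {
                                    "containerName": "client-container",
                                    "resource": "limits.memory",
                                },
                            }],
                        },
                    }],
                },
            }],
        },
    }
```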
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":335,"completed":154,"skipped":2879,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] ConfigMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-volume-map-7efacf9a-45cc-4e41-b7c9-4f93440b76db
STEP: Creating a pod to test consume configMaps
May 14 12:59:28.922: INFO: Waiting up to 5m0s for pod "pod-configmaps-e458fd22-8349-4c1a-bfdd-4ba4bf9dac2b" in namespace "configmap-4388" to be "Succeeded or Failed"
May 14 12:59:28.941: INFO: Pod "pod-configmaps-e458fd22-8349-4c1a-bfdd-4ba4bf9dac2b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.834843ms
May 14 12:59:30.961: INFO: Pod "pod-configmaps-e458fd22-8349-4c1a-bfdd-4ba4bf9dac2b": Phase="Running", Reason="", readiness=true. Elapsed: 2.039567991s
May 14 12:59:32.980: INFO: Pod "pod-configmaps-e458fd22-8349-4c1a-bfdd-4ba4bf9dac2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058303763s
STEP: Saw pod success
May 14 12:59:32.980: INFO: Pod "pod-configmaps-e458fd22-8349-4c1a-bfdd-4ba4bf9dac2b" satisfied condition "Succeeded or Failed"
May 14 12:59:32.997: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod pod-configmaps-e458fd22-8349-4c1a-bfdd-4ba4bf9dac2b container agnhost-container: <nil>
STEP: delete the pod
May 14 12:59:33.065: INFO: Waiting for pod pod-configmaps-e458fd22-8349-4c1a-bfdd-4ba4bf9dac2b to disappear
May 14 12:59:33.082: INFO: Pod pod-configmaps-e458fd22-8349-4c1a-bfdd-4ba4bf9dac2b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:59:33.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4388" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":155,"skipped":2896,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir volume type on node default medium
May 14 12:59:33.262: INFO: Waiting up to 5m0s for pod "pod-1e02cd4c-1a66-424a-a36a-7b7dbc345355" in namespace "emptydir-2130" to be "Succeeded or Failed"
May 14 12:59:33.282: INFO: Pod "pod-1e02cd4c-1a66-424a-a36a-7b7dbc345355": Phase="Pending", Reason="", readiness=false. Elapsed: 19.500351ms
May 14 12:59:35.303: INFO: Pod "pod-1e02cd4c-1a66-424a-a36a-7b7dbc345355": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.040839044s
STEP: Saw pod success
May 14 12:59:35.303: INFO: Pod "pod-1e02cd4c-1a66-424a-a36a-7b7dbc345355" satisfied condition "Succeeded or Failed"
May 14 12:59:35.455: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod pod-1e02cd4c-1a66-424a-a36a-7b7dbc345355 container test-container: <nil>
STEP: delete the pod
May 14 12:59:35.795: INFO: Waiting for pod pod-1e02cd4c-1a66-424a-a36a-7b7dbc345355 to disappear
May 14 12:59:35.812: INFO: Pod pod-1e02cd4c-1a66-424a-a36a-7b7dbc345355 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 12:59:35.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2130" for this suite.
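The emptyDir test above uses the "default medium" form of the volume, i.e. `emptyDir: {}` backed by node-local storage, as opposed to `medium: Memory`, which backs it with tmpfs; the pod then inspects the mounted directory's mode. A sketch contrasting the two forms (names, mount path, and image are illustrative assumptions):

```python
# Sketch contrasting the two emptyDir media. The "volume on default medium"
# test above uses the {} form; names and the image here are assumptions.

def emptydir_pod(name: str, in_memory: bool = False) -> dict:
    # {} selects the default medium (node-local storage);
    # {"medium": "Memory"} would back the volume with tmpfs instead.
    empty_dir = {"medium": "Memory"} if in_memory else {}
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "test-container",
                "image": "registry.k8s.io/e2e-test-images/agnhost:2.36",  # assumed image
                "command": ["sh", "-c", "ls -ld /test-volume"],
                "volumeMounts": [{"name": "test-volume", "mountPath": "/test-volume"}],
            }],
            "volumes": [{"name": "test-volume", "emptyDir": empty_dir}],
        },
    }
```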
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":156,"skipped":2908,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] DNS
... skipping 27 lines ...
May 14 12:59:38.448: INFO: Unable to read jessie_udp@dns-test-service.dns-6614 from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:38.467: INFO: Unable to read jessie_tcp@dns-test-service.dns-6614 from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:38.487: INFO: Unable to read jessie_udp@dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:38.506: INFO: Unable to read jessie_tcp@dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:38.525: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:38.543: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:38.616: INFO: Lookups using dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6614 wheezy_tcp@dns-test-service.dns-6614 wheezy_udp@dns-test-service.dns-6614.svc wheezy_tcp@dns-test-service.dns-6614.svc wheezy_udp@_http._tcp.dns-test-service.dns-6614.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6614.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6614 jessie_tcp@dns-test-service.dns-6614 jessie_udp@dns-test-service.dns-6614.svc jessie_tcp@dns-test-service.dns-6614.svc jessie_udp@_http._tcp.dns-test-service.dns-6614.svc jessie_tcp@_http._tcp.dns-test-service.dns-6614.svc]
May 14 12:59:43.636: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:43.660: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:43.679: INFO: Unable to read wheezy_udp@dns-test-service.dns-6614 from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:43.698: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6614 from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:43.716: INFO: Unable to read wheezy_udp@dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
... skipping 5 lines ...
May 14 12:59:43.906: INFO: Unable to read jessie_udp@dns-test-service.dns-6614 from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:43.925: INFO: Unable to read jessie_tcp@dns-test-service.dns-6614 from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:43.944: INFO: Unable to read jessie_udp@dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:43.967: INFO: Unable to read jessie_tcp@dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:43.985: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:44.003: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:44.077: INFO: Lookups using dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6614 wheezy_tcp@dns-test-service.dns-6614 wheezy_udp@dns-test-service.dns-6614.svc wheezy_tcp@dns-test-service.dns-6614.svc wheezy_udp@_http._tcp.dns-test-service.dns-6614.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6614.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6614 jessie_tcp@dns-test-service.dns-6614 jessie_udp@dns-test-service.dns-6614.svc jessie_tcp@dns-test-service.dns-6614.svc jessie_udp@_http._tcp.dns-test-service.dns-6614.svc jessie_tcp@_http._tcp.dns-test-service.dns-6614.svc]
May 14 12:59:48.636: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:48.659: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:48.678: INFO: Unable to read wheezy_udp@dns-test-service.dns-6614 from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:48.696: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6614 from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:48.714: INFO: Unable to read wheezy_udp@dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
... skipping 5 lines ...
May 14 12:59:48.903: INFO: Unable to read jessie_udp@dns-test-service.dns-6614 from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:48.923: INFO: Unable to read jessie_tcp@dns-test-service.dns-6614 from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:48.941: INFO: Unable to read jessie_udp@dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:48.959: INFO: Unable to read jessie_tcp@dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:48.977: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:48.995: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:49.068: INFO: Lookups using dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6614 wheezy_tcp@dns-test-service.dns-6614 wheezy_udp@dns-test-service.dns-6614.svc wheezy_tcp@dns-test-service.dns-6614.svc wheezy_udp@_http._tcp.dns-test-service.dns-6614.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6614.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6614 jessie_tcp@dns-test-service.dns-6614 jessie_udp@dns-test-service.dns-6614.svc jessie_tcp@dns-test-service.dns-6614.svc jessie_udp@_http._tcp.dns-test-service.dns-6614.svc jessie_tcp@_http._tcp.dns-test-service.dns-6614.svc]
May 14 12:59:53.635: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:53.656: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:53.677: INFO: Unable to read wheezy_udp@dns-test-service.dns-6614 from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:53.696: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6614 from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:53.715: INFO: Unable to read wheezy_udp@dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
... skipping 5 lines ...
May 14 12:59:53.907: INFO: Unable to read jessie_udp@dns-test-service.dns-6614 from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:53.932: INFO: Unable to read jessie_tcp@dns-test-service.dns-6614 from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:53.950: INFO: Unable to read jessie_udp@dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:53.969: INFO: Unable to read jessie_tcp@dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:53.988: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:54.008: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:54.087: INFO: Lookups using dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6614 wheezy_tcp@dns-test-service.dns-6614 wheezy_udp@dns-test-service.dns-6614.svc wheezy_tcp@dns-test-service.dns-6614.svc wheezy_udp@_http._tcp.dns-test-service.dns-6614.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6614.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6614 jessie_tcp@dns-test-service.dns-6614 jessie_udp@dns-test-service.dns-6614.svc jessie_tcp@dns-test-service.dns-6614.svc jessie_udp@_http._tcp.dns-test-service.dns-6614.svc jessie_tcp@_http._tcp.dns-test-service.dns-6614.svc]
May 14 12:59:58.635: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:58.656: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:58.674: INFO: Unable to read wheezy_udp@dns-test-service.dns-6614 from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:58.693: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6614 from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:58.712: INFO: Unable to read wheezy_udp@dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
... skipping 5 lines ...
May 14 12:59:58.897: INFO: Unable to read jessie_udp@dns-test-service.dns-6614 from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:58.915: INFO: Unable to read jessie_tcp@dns-test-service.dns-6614 from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:58.934: INFO: Unable to read jessie_udp@dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:58.952: INFO: Unable to read jessie_tcp@dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:58.970: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:58.988: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 12:59:59.061: INFO: Lookups using dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6614 wheezy_tcp@dns-test-service.dns-6614 wheezy_udp@dns-test-service.dns-6614.svc wheezy_tcp@dns-test-service.dns-6614.svc wheezy_udp@_http._tcp.dns-test-service.dns-6614.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6614.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6614 jessie_tcp@dns-test-service.dns-6614 jessie_udp@dns-test-service.dns-6614.svc jessie_tcp@dns-test-service.dns-6614.svc jessie_udp@_http._tcp.dns-test-service.dns-6614.svc jessie_tcp@_http._tcp.dns-test-service.dns-6614.svc]
May 14 13:00:03.640: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 13:00:03.665: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 13:00:03.685: INFO: Unable to read wheezy_udp@dns-test-service.dns-6614 from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 13:00:03.705: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6614 from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 13:00:03.723: INFO: Unable to read wheezy_udp@dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
... skipping 5 lines ...
May 14 13:00:03.929: INFO: Unable to read jessie_udp@dns-test-service.dns-6614 from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 13:00:03.952: INFO: Unable to read jessie_tcp@dns-test-service.dns-6614 from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 13:00:03.971: INFO: Unable to read jessie_udp@dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 13:00:03.989: INFO: Unable to read jessie_tcp@dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 13:00:04.011: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 13:00:04.029: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 13:00:04.103: INFO: Lookups using dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6614 wheezy_tcp@dns-test-service.dns-6614 wheezy_udp@dns-test-service.dns-6614.svc wheezy_tcp@dns-test-service.dns-6614.svc wheezy_udp@_http._tcp.dns-test-service.dns-6614.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6614.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6614 jessie_tcp@dns-test-service.dns-6614 jessie_udp@dns-test-service.dns-6614.svc jessie_tcp@dns-test-service.dns-6614.svc jessie_udp@_http._tcp.dns-test-service.dns-6614.svc jessie_tcp@_http._tcp.dns-test-service.dns-6614.svc]
May 14 13:00:08.750: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6614.svc from pod dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268: the server could not find the requested resource (get pods dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268)
May 14 13:00:09.065: INFO: Lookups using dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-6614.svc]
May 14 13:00:14.086: INFO: DNS probes using dns-6614/dns-test-7cee7b50-ae6c-4628-9529-be4d525b1268 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:00:14.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6614" for this suite.
•{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":335,"completed":157,"skipped":2929,"failed":0}
------------------------------
[sig-apps] ReplicaSet
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] ReplicaSet
... skipping 15 lines ...
May 14 13:00:18.608: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:00:18.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2239" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":335,"completed":158,"skipped":2931,"failed":0}
------------------------------
[sig-network] DNS
  should provide DNS for services [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] DNS
... skipping 15 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 14 13:00:23.084: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local from pod dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452: the server could not find the requested resource (get pods dns-test-c208d276-f937-4da8-a75a-88013ef8e452)
May 14 13:00:23.102: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local from pod dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452: the server could not find the requested resource (get pods dns-test-c208d276-f937-4da8-a75a-88013ef8e452)
May 14 13:00:23.232: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local from pod dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452: the server could not find the requested resource (get pods dns-test-c208d276-f937-4da8-a75a-88013ef8e452)
May 14 13:00:23.251: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local from pod dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452: the server could not find the requested resource (get pods dns-test-c208d276-f937-4da8-a75a-88013ef8e452)
May 14 13:00:23.325: INFO: Lookups using dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local]
May 14 13:00:28.383: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local from pod dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452: the server could not find the requested resource (get pods dns-test-c208d276-f937-4da8-a75a-88013ef8e452)
May 14 13:00:28.401: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local from pod dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452: the server could not find the requested resource (get pods dns-test-c208d276-f937-4da8-a75a-88013ef8e452)
May 14 13:00:28.532: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local from pod dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452: the server could not find the requested resource (get pods dns-test-c208d276-f937-4da8-a75a-88013ef8e452)
May 14 13:00:28.551: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local from pod dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452: the server could not find the requested resource (get pods dns-test-c208d276-f937-4da8-a75a-88013ef8e452)
May 14 13:00:28.632: INFO: Lookups using dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local]
May 14 13:00:33.385: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local from pod dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452: the server could not find the requested resource (get pods dns-test-c208d276-f937-4da8-a75a-88013ef8e452)
May 14 13:00:33.403: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local from pod dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452: the server could not find the requested resource (get pods dns-test-c208d276-f937-4da8-a75a-88013ef8e452)
May 14 13:00:33.533: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local from pod dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452: the server could not find the requested resource (get pods dns-test-c208d276-f937-4da8-a75a-88013ef8e452)
May 14 13:00:33.553: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local from pod dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452: the server could not find the requested resource (get pods dns-test-c208d276-f937-4da8-a75a-88013ef8e452)
May 14 13:00:33.625: INFO: Lookups using dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local]
May 14 13:00:38.381: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local from pod dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452: the server could not find the requested resource (get pods dns-test-c208d276-f937-4da8-a75a-88013ef8e452)
May 14 13:00:38.400: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local from pod dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452: the server could not find the requested resource (get pods dns-test-c208d276-f937-4da8-a75a-88013ef8e452)
May 14 13:00:38.534: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local from pod dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452: the server could not find the requested resource (get pods dns-test-c208d276-f937-4da8-a75a-88013ef8e452)
May 14 13:00:38.553: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local from pod dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452: the server could not find the requested resource (get pods dns-test-c208d276-f937-4da8-a75a-88013ef8e452)
May 14 13:00:38.628: INFO: Lookups using dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local]
May 14 13:00:43.382: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local from pod dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452: the server could not find the requested resource (get pods dns-test-c208d276-f937-4da8-a75a-88013ef8e452)
May 14 13:00:43.401: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local from pod dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452: the server could not find the requested resource (get pods dns-test-c208d276-f937-4da8-a75a-88013ef8e452)
May 14 13:00:43.728: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local from pod dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452: the server could not find the requested resource (get pods dns-test-c208d276-f937-4da8-a75a-88013ef8e452)
May 14 13:00:43.748: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local from pod dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452: the server could not find the requested resource (get pods dns-test-c208d276-f937-4da8-a75a-88013ef8e452)
May 14 13:00:43.838: INFO: Lookups using dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local]
May 14 13:00:48.385: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local from pod dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452: the server could not find the requested resource (get pods dns-test-c208d276-f937-4da8-a75a-88013ef8e452)
May 14 13:00:48.403: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local from pod dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452: the server could not find the requested resource (get pods dns-test-c208d276-f937-4da8-a75a-88013ef8e452)
May 14 13:00:48.532: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local from pod dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452: the server could not find the requested resource (get pods dns-test-c208d276-f937-4da8-a75a-88013ef8e452)
May 14 13:00:48.550: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local from pod dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452: the server could not find the requested resource (get pods dns-test-c208d276-f937-4da8-a75a-88013ef8e452)
May 14 13:00:48.623: INFO: Lookups using dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9402.svc.cluster.local]
May 14 13:00:53.624: INFO: DNS probes using dns-9402/dns-test-c208d276-f937-4da8-a75a-88013ef8e452 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:00:53.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9402" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":335,"completed":159,"skipped":2975,"failed":0}
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 14 13:00:54.015: INFO: Waiting up to 5m0s for pod "pod-4cfc1cda-b138-4a8c-ba7d-fe49fa7e2160" in namespace "emptydir-7648" to be "Succeeded or Failed"
May 14 13:00:54.036: INFO: Pod "pod-4cfc1cda-b138-4a8c-ba7d-fe49fa7e2160": Phase="Pending", Reason="", readiness=false. Elapsed: 20.811718ms
May 14 13:00:56.055: INFO: Pod "pod-4cfc1cda-b138-4a8c-ba7d-fe49fa7e2160": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039289099s
May 14 13:00:58.074: INFO: Pod "pod-4cfc1cda-b138-4a8c-ba7d-fe49fa7e2160": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058332818s
STEP: Saw pod success
May 14 13:00:58.074: INFO: Pod "pod-4cfc1cda-b138-4a8c-ba7d-fe49fa7e2160" satisfied condition "Succeeded or Failed"
May 14 13:00:58.091: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod pod-4cfc1cda-b138-4a8c-ba7d-fe49fa7e2160 container test-container: <nil>
STEP: delete the pod
May 14 13:00:58.165: INFO: Waiting for pod pod-4cfc1cda-b138-4a8c-ba7d-fe49fa7e2160 to disappear
May 14 13:00:58.182: INFO: Pod pod-4cfc1cda-b138-4a8c-ba7d-fe49fa7e2160 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:00:58.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7648" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":160,"skipped":3027,"failed":0}
------------------------------
[sig-network] Services
  should serve multiport endpoints from pods [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] Services
... skipping 43 lines ...
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:01:08.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1275" for this suite.
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":335,"completed":161,"skipped":3066,"failed":0}
------------------------------
[sig-network] Services
  should complete a service status lifecycle [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] Services
... skipping 43 lines ...
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:01:09.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5436" for this suite.
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":335,"completed":162,"skipped":3080,"failed":0}
------------------------------
[sig-node] Container Lifecycle Hook
  when create a pod with lifecycle hook
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Container Lifecycle Hook
... skipping 22 lines ...
May 14 13:01:17.614: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [sig-node] Container Lifecycle Hook
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:01:17.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3222" for this suite.
•{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":335,"completed":163,"skipped":3088,"failed":0}
------------------------------
[sig-node] Kubelet
  when scheduling a busybox command that always fails in a pod
  should have an terminated reason [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Kubelet
... skipping 10 lines ...
[It] should have an terminated reason [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-node] Kubelet
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:01:21.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7175" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":335,"completed":164,"skipped":3103,"failed":0}
------------------------------
[sig-storage] ConfigMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] ConfigMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-volume-421e84e9-a8f6-44c1-8140-919f18bdab8b
STEP: Creating a pod to test consume configMaps
May 14 13:01:22.104: INFO: Waiting up to 5m0s for pod "pod-configmaps-e7d356da-35af-446a-8951-0fe38e99d01c" in namespace "configmap-9808" to be "Succeeded or Failed"
May 14 13:01:22.126: INFO: Pod "pod-configmaps-e7d356da-35af-446a-8951-0fe38e99d01c": Phase="Pending", Reason="", readiness=false. Elapsed: 21.114628ms
May 14 13:01:24.147: INFO: Pod "pod-configmaps-e7d356da-35af-446a-8951-0fe38e99d01c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.043026386s
STEP: Saw pod success
May 14 13:01:24.148: INFO: Pod "pod-configmaps-e7d356da-35af-446a-8951-0fe38e99d01c" satisfied condition "Succeeded or Failed"
May 14 13:01:24.165: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod pod-configmaps-e7d356da-35af-446a-8951-0fe38e99d01c container configmap-volume-test: <nil>
STEP: delete the pod
May 14 13:01:24.240: INFO: Waiting for pod pod-configmaps-e7d356da-35af-446a-8951-0fe38e99d01c to disappear
May 14 13:01:24.257: INFO: Pod pod-configmaps-e7d356da-35af-446a-8951-0fe38e99d01c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:01:24.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9808" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":335,"completed":165,"skipped":3165,"failed":0}
------------------------------
[sig-storage] Projected downwardAPI
  should update labels on modification [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 12 lines ...
May 14 13:01:26.500: INFO: The status of Pod labelsupdate9c0dbc48-51ab-4b32-a495-51aa7c391c61 is Running (Ready = true)
May 14 13:01:27.099: INFO: Successfully updated pod "labelsupdate9c0dbc48-51ab-4b32-a495-51aa7c391c61"
[AfterEach] [sig-storage] Projected downwardAPI
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:01:31.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5451" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":335,"completed":166,"skipped":3184,"failed":0}
------------------------------
[sig-storage] Projected downwardAPI
  should provide podname only [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
May 14 13:01:31.366: INFO: Waiting up to 5m0s for pod "downwardapi-volume-886dd425-a4fe-4e69-996a-2a4f1b68fbc8" in namespace "projected-4999" to be "Succeeded or Failed"
May 14 13:01:31.387: INFO: Pod "downwardapi-volume-886dd425-a4fe-4e69-996a-2a4f1b68fbc8": Phase="Pending", Reason="", readiness=false. Elapsed: 21.151266ms
May 14 13:01:33.406: INFO: Pod "downwardapi-volume-886dd425-a4fe-4e69-996a-2a4f1b68fbc8": Phase="Running", Reason="", readiness=true. Elapsed: 2.039512202s
May 14 13:01:35.424: INFO: Pod "downwardapi-volume-886dd425-a4fe-4e69-996a-2a4f1b68fbc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058453368s
STEP: Saw pod success
May 14 13:01:35.425: INFO: Pod "downwardapi-volume-886dd425-a4fe-4e69-996a-2a4f1b68fbc8" satisfied condition "Succeeded or Failed"
May 14 13:01:35.442: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod downwardapi-volume-886dd425-a4fe-4e69-996a-2a4f1b68fbc8 container client-container: <nil>
STEP: delete the pod
May 14 13:01:35.504: INFO: Waiting for pod downwardapi-volume-886dd425-a4fe-4e69-996a-2a4f1b68fbc8 to disappear
May 14 13:01:35.522: INFO: Pod downwardapi-volume-886dd425-a4fe-4e69-996a-2a4f1b68fbc8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:01:35.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4999" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":335,"completed":167,"skipped":3222,"failed":0}
------------------------------
[sig-node] Security Context
  When creating a container with runAsUser
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Security Context
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
May 14 13:01:35.710: INFO: Waiting up to 5m0s for pod "busybox-user-65534-6df1cae8-5788-4264-a35f-c0a16acbf7cf" in namespace "security-context-test-4885" to be "Succeeded or Failed"
May 14 13:01:35.735: INFO: Pod "busybox-user-65534-6df1cae8-5788-4264-a35f-c0a16acbf7cf": Phase="Pending", Reason="", readiness=false. Elapsed: 25.02762ms
May 14 13:01:37.761: INFO: Pod "busybox-user-65534-6df1cae8-5788-4264-a35f-c0a16acbf7cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.051878008s
May 14 13:01:37.761: INFO: Pod "busybox-user-65534-6df1cae8-5788-4264-a35f-c0a16acbf7cf" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:01:37.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4885" for this suite.
•{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":168,"skipped":3247,"failed":0}
------------------------------
[sig-node] KubeletManagedEtcHosts
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] KubeletManagedEtcHosts
... skipping 67 lines ...
May 14 13:01:44.080: INFO: ExecWithOptions: execute(POST https://capz-05t52q-8c0f6c09.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-5067/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING))
May 14 13:01:44.306: INFO: Exec stderr: ""
[AfterEach] [sig-node] KubeletManagedEtcHosts
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:01:44.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-5067" for this suite.
•{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":169,"skipped":3251,"failed":0}
------------------------------
[sig-node] Probing container
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Probing container
... skipping 14 lines ...
May 14 13:02:37.197: INFO: Restart count of pod container-probe-7410/busybox-16e98335-7080-4a85-9b22-fc43779890d0 is now 1 (50.644353027s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:02:37.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7410" for this suite.
•{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":335,"completed":170,"skipped":3388,"failed":0}
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected configMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-9f61b71d-dfe8-4eee-a5a0-52585b2a1201
STEP: Creating a pod to test consume configMaps
May 14 13:02:37.461: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9d7de129-0bba-43d9-9e59-f0963fc2ed51" in namespace "projected-3273" to be "Succeeded or Failed"
May 14 13:02:37.483: INFO: Pod "pod-projected-configmaps-9d7de129-0bba-43d9-9e59-f0963fc2ed51": Phase="Pending", Reason="", readiness=false. Elapsed: 21.697808ms
May 14 13:02:39.503: INFO: Pod "pod-projected-configmaps-9d7de129-0bba-43d9-9e59-f0963fc2ed51": Phase="Running", Reason="", readiness=true. Elapsed: 2.042118835s
May 14 13:02:41.523: INFO: Pod "pod-projected-configmaps-9d7de129-0bba-43d9-9e59-f0963fc2ed51": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.061312965s
STEP: Saw pod success
May 14 13:02:41.523: INFO: Pod "pod-projected-configmaps-9d7de129-0bba-43d9-9e59-f0963fc2ed51" satisfied condition "Succeeded or Failed"
May 14 13:02:41.540: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod pod-projected-configmaps-9d7de129-0bba-43d9-9e59-f0963fc2ed51 container agnhost-container: <nil>
STEP: delete the pod
May 14 13:02:41.607: INFO: Waiting for pod pod-projected-configmaps-9d7de129-0bba-43d9-9e59-f0963fc2ed51 to disappear
May 14 13:02:41.624: INFO: Pod pod-projected-configmaps-9d7de129-0bba-43d9-9e59-f0963fc2ed51 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:02:41.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3273" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":335,"completed":171,"skipped":3418,"failed":0}
------------------------------
[sig-node] Probing container
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Probing container
... skipping 21 lines ...
May 14 13:03:03.855: INFO: The status of Pod test-webserver-0351b740-14a3-4853-9de2-b6481a7bc017 is Running (Ready = true)
May 14 13:03:03.872: INFO: Container started at 2022-05-14 13:02:42 +0000 UTC, pod became ready at 2022-05-14 13:03:01 +0000 UTC
[AfterEach] [sig-node] Probing container
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:03:03.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5348" for this suite.
•{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":335,"completed":172,"skipped":3455,"failed":0}
------------------------------
[sig-storage] Secrets
  should be immutable if `immutable` field is set [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Secrets
... skipping 6 lines ...
[It] should be immutable if `immutable` field is set [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-storage] Secrets
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:03:04.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5782" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":335,"completed":173,"skipped":3456,"failed":0}
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0666 on node default medium
May 14 13:03:04.405: INFO: Waiting up to 5m0s for pod "pod-4c8e8860-8863-4d8a-a929-17940590fd20" in namespace "emptydir-4800" to be "Succeeded or Failed"
May 14 13:03:04.432: INFO: Pod "pod-4c8e8860-8863-4d8a-a929-17940590fd20": Phase="Pending", Reason="", readiness=false. Elapsed: 26.902406ms
May 14 13:03:06.450: INFO: Pod "pod-4c8e8860-8863-4d8a-a929-17940590fd20": Phase="Running", Reason="", readiness=true. Elapsed: 2.044750713s
May 14 13:03:08.468: INFO: Pod "pod-4c8e8860-8863-4d8a-a929-17940590fd20": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.062758717s
STEP: Saw pod success
May 14 13:03:08.468: INFO: Pod "pod-4c8e8860-8863-4d8a-a929-17940590fd20" satisfied condition "Succeeded or Failed"
May 14 13:03:08.485: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod pod-4c8e8860-8863-4d8a-a929-17940590fd20 container test-container: <nil>
STEP: delete the pod
May 14 13:03:08.564: INFO: Waiting for pod pod-4c8e8860-8863-4d8a-a929-17940590fd20 to disappear
May 14 13:03:08.582: INFO: Pod pod-4c8e8860-8863-4d8a-a929-17940590fd20 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:03:08.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4800" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":174,"skipped":3466,"failed":0}
------------------------------
[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Container Lifecycle Hook
... skipping 22 lines ...
May 14 13:03:17.014: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [sig-node] Container Lifecycle Hook
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:03:17.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-251" for this suite.
•{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":335,"completed":175,"skipped":3488,"failed":0}
------------------------------
[sig-storage] Downward API volume
  should provide container's memory request [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Downward API volume
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
May 14 13:03:17.243: INFO: Waiting up to 5m0s for pod "downwardapi-volume-53080ff8-0bc0-4614-a995-41580a4bbbf7" in namespace "downward-api-5441" to be "Succeeded or Failed"
May 14 13:03:17.262: INFO: Pod "downwardapi-volume-53080ff8-0bc0-4614-a995-41580a4bbbf7": Phase="Pending", Reason="", readiness=false. Elapsed: 19.070381ms
May 14 13:03:19.281: INFO: Pod "downwardapi-volume-53080ff8-0bc0-4614-a995-41580a4bbbf7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.038178521s
STEP: Saw pod success
May 14 13:03:19.281: INFO: Pod "downwardapi-volume-53080ff8-0bc0-4614-a995-41580a4bbbf7" satisfied condition "Succeeded or Failed"
May 14 13:03:19.298: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod downwardapi-volume-53080ff8-0bc0-4614-a995-41580a4bbbf7 container client-container: <nil>
STEP: delete the pod
May 14 13:03:19.371: INFO: Waiting for pod downwardapi-volume-53080ff8-0bc0-4614-a995-41580a4bbbf7 to disappear
May 14 13:03:19.388: INFO: Pod downwardapi-volume-53080ff8-0bc0-4614-a995-41580a4bbbf7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:03:19.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5441" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":335,"completed":176,"skipped":3524,"failed":0}
------------------------------
[sig-storage] Projected downwardAPI
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
May 14 13:03:19.592: INFO: Waiting up to 5m0s for pod "downwardapi-volume-540de5ea-229e-4dee-a494-290fc1d4fc88" in namespace "projected-3858" to be "Succeeded or Failed"
May 14 13:03:19.610: INFO: Pod "downwardapi-volume-540de5ea-229e-4dee-a494-290fc1d4fc88": Phase="Pending", Reason="", readiness=false. Elapsed: 17.962288ms
May 14 13:03:21.627: INFO: Pod "downwardapi-volume-540de5ea-229e-4dee-a494-290fc1d4fc88": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 2.03563723s
STEP: Saw pod success
May 14 13:03:21.627: INFO: Pod "downwardapi-volume-540de5ea-229e-4dee-a494-290fc1d4fc88" satisfied condition "Succeeded or Failed"
May 14 13:03:21.645: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod downwardapi-volume-540de5ea-229e-4dee-a494-290fc1d4fc88 container client-container: <nil>
STEP: delete the pod
May 14 13:03:21.707: INFO: Waiting for pod downwardapi-volume-540de5ea-229e-4dee-a494-290fc1d4fc88 to disappear
May 14 13:03:21.725: INFO: Pod downwardapi-volume-540de5ea-229e-4dee-a494-290fc1d4fc88 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:03:21.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3858" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":177,"skipped":3527,"failed":0}
------------------------------
[sig-api-machinery] Aggregator
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] Aggregator
... skipping 20 lines ...
[AfterEach] [sig-api-machinery] Aggregator
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68
[AfterEach] [sig-api-machinery] Aggregator
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:03:31.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-2483" for this suite.
•{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":335,"completed":178,"skipped":3543,"failed":0}
------------------------------
[sig-node] Pods
  should run through the lifecycle of Pods and PodStatus [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Pods
... skipping 34 lines ...
May 14 13:03:35.786: INFO: observed event type MODIFIED
May 14 13:03:35.803: INFO: observed event type MODIFIED
[AfterEach] [sig-node] Pods
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:03:35.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4025" for this suite.
•{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":335,"completed":179,"skipped":3569,"failed":0}
------------------------------
[sig-auth] ServiceAccounts
  should guarantee kube-root-ca.crt exist in any namespace [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 13 lines ...
STEP: waiting for the root ca configmap reconciled
May 14 13:03:37.069: INFO: Reconciled root ca configmap in namespace "svcaccounts-7231"
[AfterEach] [sig-auth] ServiceAccounts
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:03:37.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7231" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":335,"completed":180,"skipped":3583,"failed":0}
------------------------------
[sig-apps] Job
  should adopt matching orphans and release non-matching pods [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] Job
... skipping 21 lines ...
May 14 13:03:42.448: INFO: Pod "adopt-release-bd24z": Phase="Running", Reason="", readiness=true.
Elapsed: 25.780057ms
May 14 13:03:42.448: INFO: Pod "adopt-release-bd24z" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:03:42.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-457" for this suite.
•{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":335,"completed":181,"skipped":3615,"failed":0}
------------------------------
[sig-api-machinery] Watchers
  should be able to start watching from a specific resource version [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] Watchers
... skipping 14 lines ...
May 14 13:03:42.787: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1257 dc7561aa-6fec-443a-bf40-fff0dbb82b85 18997 0 2022-05-14 13:03:42 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-05-14 13:03:42 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 14 13:03:42.787: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1257 dc7561aa-6fec-443a-bf40-fff0dbb82b85 18998 0 2022-05-14 13:03:42 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-05-14 13:03:42 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:03:42.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1257" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":335,"completed":182,"skipped":3650,"failed":0}
------------------------------
[sig-api-machinery] Discovery
  should validate PreferredVersion for each APIGroup [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] Discovery
... skipping 89 lines ...
May 14 13:03:43.868: INFO: Versions found [{crd.projectcalico.org/v1 v1}]
May 14 13:03:43.868: INFO: crd.projectcalico.org/v1 matches crd.projectcalico.org/v1
[AfterEach] [sig-api-machinery] Discovery
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:03:43.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-1729" for this suite.
•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":335,"completed":183,"skipped":3665,"failed":0}
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for CRD with validation schema [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 43 lines ...
May 14 13:03:52.640: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://capz-05t52q-8c0f6c09.northcentralus.cloudapp.azure.com:6443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig --namespace=crd-publish-openapi-4028 explain e2e-test-crd-publish-openapi-1275-crds.spec'
May 14 13:03:52.863: INFO: stderr: ""
May 14 13:03:52.863: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-1275-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n"
May 14 13:03:52.863: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://capz-05t52q-8c0f6c09.northcentralus.cloudapp.azure.com:6443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig --namespace=crd-publish-openapi-4028 explain e2e-test-crd-publish-openapi-1275-crds.spec.bars'
May 14 13:03:53.091: INFO: stderr: ""
May 14 13:03:53.091: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-1275-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n feeling\t<string>\n Whether Bar is feeling great.\n\n name\t<string> -required-\n Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
May 14 13:03:53.092: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://capz-05t52q-8c0f6c09.northcentralus.cloudapp.azure.com:6443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig --namespace=crd-publish-openapi-4028 explain e2e-test-crd-publish-openapi-1275-crds.spec.bars2'
May 14 13:03:53.315: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:03:56.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4028" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":335,"completed":184,"skipped":3677,"failed":0}
------------------------------
[sig-node] PodTemplates
  should delete a collection of pod templates [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] PodTemplates
... skipping 15 lines ...
STEP: check that the list of pod templates matches the requested quantity
May 14 13:03:57.212: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:03:57.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-2384" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":335,"completed":185,"skipped":3679,"failed":0}
------------------------------
[sig-storage] Downward API volume
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Downward API volume
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
May 14 13:03:57.414: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5a488c29-f015-4730-97bf-cac9e0f29a7c" in namespace "downward-api-5555" to be "Succeeded or Failed"
May 14 13:03:57.436: INFO: Pod "downwardapi-volume-5a488c29-f015-4730-97bf-cac9e0f29a7c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.498551ms
May 14 13:03:59.455: INFO: Pod "downwardapi-volume-5a488c29-f015-4730-97bf-cac9e0f29a7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.041753066s
STEP: Saw pod success
May 14 13:03:59.455: INFO: Pod "downwardapi-volume-5a488c29-f015-4730-97bf-cac9e0f29a7c" satisfied condition "Succeeded or Failed"
May 14 13:03:59.473: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod downwardapi-volume-5a488c29-f015-4730-97bf-cac9e0f29a7c container client-container: <nil>
STEP: delete the pod
May 14 13:03:59.539: INFO: Waiting for pod downwardapi-volume-5a488c29-f015-4730-97bf-cac9e0f29a7c to disappear
May 14 13:03:59.556: INFO: Pod downwardapi-volume-5a488c29-f015-4730-97bf-cac9e0f29a7c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:03:59.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5555" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":186,"skipped":3682,"failed":0}
------------------------------
[sig-node] Pods
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Pods
... skipping 13 lines ...
May 14 13:03:59.759: INFO: The status of Pod pod-logs-websocket-73e0f5ec-ba82-4b15-b0aa-42fa2476a946 is Pending, waiting for it to be Running (with Ready = true)
May 14 13:04:01.777: INFO: The status of Pod pod-logs-websocket-73e0f5ec-ba82-4b15-b0aa-42fa2476a946 is Running (Ready = true)
[AfterEach] [sig-node] Pods
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:04:01.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-511" for this suite.
•{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":335,"completed":187,"skipped":3711,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client
  Proxy server
  should support proxy with --port 0 [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
May 14 13:04:02.100: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl kubectl --server=https://capz-05t52q-8c0f6c09.northcentralus.cloudapp.azure.com:6443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig --namespace=kubectl-2657 proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:04:02.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2657" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":335,"completed":188,"skipped":3739,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job
  should delete a job [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] Job
... skipping 13 lines ...
May 14 13:04:04.780: INFO: Terminating Job.batch foo pods took: 101.122505ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:04:37.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1719" for this suite.
•{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":335,"completed":189,"skipped":3766,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should update annotations on modification [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 12 lines ...
May 14 13:04:39.732: INFO: The status of Pod annotationupdatedede0001-5052-4669-9493-8c889f992546 is Running (Ready = true)
May 14 13:04:40.327: INFO: Successfully updated pod "annotationupdatedede0001-5052-4669-9493-8c889f992546"
[AfterEach] [sig-storage] Projected downwardAPI
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:04:42.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6839" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":335,"completed":190,"skipped":3810,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be immutable if `immutable` field is set [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] ConfigMap
... skipping 6 lines ...
[It] should be immutable if `immutable` field is set [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-storage] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:04:42.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5218" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":335,"completed":191,"skipped":3832,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-node] Container Runtime
  blackbox test
  on terminated container
  should report termination message if TerminationMessagePath is set [Excluded:WindowsDocker] [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171
[BeforeEach] [sig-node] Container Runtime
... skipping 13 lines ...
May 14 13:04:45.005: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:04:45.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9408" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set [Excluded:WindowsDocker] [NodeConformance]","total":335,"completed":192,"skipped":3844,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] DNS
... skipping 19 lines ...
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:04:49.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9413" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":335,"completed":193,"skipped":3868,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected configMap
... skipping 12 lines ...
STEP: Updating configmap projected-configmap-test-upd-77b97156-397b-4e84-a6c1-d88ebaa5c695
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:04:53.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2250" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":335,"completed":194,"skipped":3877,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should orphan pods created by rc if delete options say so [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 135 lines ...
May 14 13:05:39.163: INFO: Deleting pod "simpletest.rc-zwrcd" in namespace "gc-5456"
May 14 13:05:39.207: INFO: Deleting pod "simpletest.rc-zzwdj" in namespace "gc-5456"
[AfterEach] [sig-api-machinery] Garbage collector
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:05:39.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5456" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":335,"completed":195,"skipped":3890,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] InitContainer [NodeConformance]
  should invoke init containers on a RestartNever pod [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] InitContainer [NodeConformance]
... skipping 10 lines ...
STEP: creating the pod
May 14 13:05:39.416: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:05:57.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8649" for this suite.
•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":335,"completed":196,"skipped":3968,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] EndpointSlice
  should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] EndpointSlice
... skipping 8 lines ...
[It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-network] EndpointSlice
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:05:57.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-8191" for this suite.
•{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":335,"completed":197,"skipped":3978,"failed":0}
SSS
------------------------------
[sig-apps] Deployment
  deployment should support rollover [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] Deployment
... skipping 44 lines ...
May 14 13:06:22.071: INFO: Pod "test-rollover-deployment-668b7f667d-vxqqw" is available: &Pod{ObjectMeta:{test-rollover-deployment-668b7f667d-vxqqw test-rollover-deployment-668b7f667d- deployment-2417 f55d33c2-b89f-4f80-9541-e5066fcc7ffb 22191 0 2022-05-14 13:06:09 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:668b7f667d] map[cni.projectcalico.org/containerID:1e62c8f6b3c543a854b7832229d10f41de835c9da881ff0ef1b72c266fe4721b cni.projectcalico.org/podIP:192.168.204.171/32 cni.projectcalico.org/podIPs:192.168.204.171/32] [{apps/v1 ReplicaSet test-rollover-deployment-668b7f667d 2cac2ced-7c50-49ea-89da-94c35744e32e 0xc0038ccb87 0xc0038ccb88}] [] [{kube-controller-manager Update v1 2022-05-14 13:06:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2cac2ced-7c50-49ea-89da-94c35744e32e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2022-05-14 13:06:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {Go-http-client Update v1 2022-05-14 13:06:11 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.204.171\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zj7hw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:Resourc
eList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zj7hw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-05t52q-md-0-scnhc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS
:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 13:06:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 13:06:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 13:06:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 13:06:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:192.168.204.171,StartTime:2022-05-14 13:06:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-14 13:06:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43,ContainerID:containerd://1a6041a4c063794acceab3d88c15dc1dc0f40c1b1928fe1fa380ec1d55d77e87,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.204.171,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:06:22.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2417" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":335,"completed":198,"skipped":3981,"failed":0}
SSSSS
------------------------------
[sig-apps] DisruptionController
  should observe PodDisruptionBudget status updated [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] DisruptionController
... skipping 11 lines ...
STEP: Waiting for all pods to be running
May 14 13:06:22.401: INFO: running pods: 0 < 3
[AfterEach] [sig-apps] DisruptionController
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:06:24.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-8229" for this suite.
•{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":335,"completed":199,"skipped":3986,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected secret
... skipping 16 lines ...
STEP: Creating secret with name s-test-opt-create-f263540e-8510-4095-a1fa-30ab686332ae
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:07:37.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1293" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":335,"completed":200,"skipped":3997,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete RS created by deployment when not orphaning [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 37 lines ...
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:07:39.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3947" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":335,"completed":201,"skipped":4103,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected secret
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating projection with secret that has name projected-secret-test-d392c5cd-500d-4bc1-baaf-92d0a6ddf981
STEP: Creating a pod to test consume secrets
May 14 13:07:39.313: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-28469efd-1504-412f-b31e-78a97d6c192e" in namespace "projected-3045" to be "Succeeded or Failed"
May 14 13:07:39.337: INFO: Pod "pod-projected-secrets-28469efd-1504-412f-b31e-78a97d6c192e": Phase="Pending", Reason="", readiness=false. Elapsed: 24.135074ms
May 14 13:07:41.354: INFO: Pod "pod-projected-secrets-28469efd-1504-412f-b31e-78a97d6c192e": Phase="Running", Reason="", readiness=true. Elapsed: 2.041672359s
May 14 13:07:43.374: INFO: Pod "pod-projected-secrets-28469efd-1504-412f-b31e-78a97d6c192e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061144982s
STEP: Saw pod success
May 14 13:07:43.374: INFO: Pod "pod-projected-secrets-28469efd-1504-412f-b31e-78a97d6c192e" satisfied condition "Succeeded or Failed"
May 14 13:07:43.392: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod pod-projected-secrets-28469efd-1504-412f-b31e-78a97d6c192e container projected-secret-volume-test: <nil>
STEP: delete the pod
May 14 13:07:43.471: INFO: Waiting for pod pod-projected-secrets-28469efd-1504-412f-b31e-78a97d6c192e to disappear
May 14 13:07:43.487: INFO: Pod pod-projected-secrets-28469efd-1504-412f-b31e-78a97d6c192e no longer exists
[AfterEach] [sig-storage] Projected secret
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:07:43.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3045" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":335,"completed":202,"skipped":4134,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-node] Sysctls [LinuxOnly] [NodeConformance]
  should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:157
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
... skipping 7 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
[It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:157
STEP: Creating a pod with an ignorelisted, but not allowlisted sysctl on the node
STEP: Wait for pod failed reason
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:07:45.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-337" for this suite.
•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":335,"completed":203,"skipped":4148,"failed":0}
SS
------------------------------
[sig-node] Secrets
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Secrets
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name secret-test-9d122408-d0eb-4b3e-bdfd-0a1b5ddf52f4
STEP: Creating a pod to test consume secrets
May 14 13:07:45.950: INFO: Waiting up to 5m0s for pod "pod-secrets-59d121e7-5603-4b60-afc7-fc932889279d" in namespace "secrets-9005" to be "Succeeded or Failed"
May 14 13:07:45.972: INFO: Pod "pod-secrets-59d121e7-5603-4b60-afc7-fc932889279d": Phase="Pending", Reason="", readiness=false. Elapsed: 21.986959ms
May 14 13:07:47.991: INFO: Pod "pod-secrets-59d121e7-5603-4b60-afc7-fc932889279d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.040540823s
STEP: Saw pod success
May 14 13:07:47.991: INFO: Pod "pod-secrets-59d121e7-5603-4b60-afc7-fc932889279d" satisfied condition "Succeeded or Failed"
May 14 13:07:48.008: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod pod-secrets-59d121e7-5603-4b60-afc7-fc932889279d container secret-env-test: <nil>
STEP: delete the pod
May 14 13:07:48.071: INFO: Waiting for pod pod-secrets-59d121e7-5603-4b60-afc7-fc932889279d to disappear
May 14 13:07:48.087: INFO: Pod pod-secrets-59d121e7-5603-4b60-afc7-fc932889279d no longer exists
[AfterEach] [sig-node] Secrets
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:07:48.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9005" for this suite.
[32m•[0m{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":335,"completed":204,"skipped":4150,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-node] Probing container[0m [1mshould be restarted with a /healthz http liveness probe [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-node] Probing container ... skipping 14 lines ... May 14 13:08:10.533: INFO: Restart count of pod container-probe-6587/liveness-bd9f2615-5f17-4bab-aaf0-767c311add1f is now 1 (20.202992236s elapsed) [1mSTEP[0m: deleting the pod [AfterEach] [sig-node] Probing container /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 13:08:10.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [1mSTEP[0m: Destroying namespace "container-probe-6587" for this suite. [32m•[0m{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":335,"completed":205,"skipped":4167,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-cli] Kubectl client[0m [90mGuestbook application[0m [1mshould create and stop a working application [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-cli] Kubectl client ... skipping 191 lines ... May 14 13:08:20.744: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n"
May 14 13:08:20.744: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:08:20.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1734" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":335,"completed":206,"skipped":4180,"failed":0}
SSSSS
------------------------------
[sig-node] Kubelet
  when scheduling a busybox Pod with hostAliases
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Kubelet
... skipping 10 lines ...
May 14 13:08:20.970: INFO: The status of Pod busybox-host-aliases9d78dd78-a303-4508-8f2d-0e20fe75e186 is Pending, waiting for it to be Running (with Ready = true)
May 14 13:08:22.996: INFO: The status of Pod busybox-host-aliases9d78dd78-a303-4508-8f2d-0e20fe75e186 is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:08:23.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5566" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":207,"skipped":4185,"failed":0}
------------------------------
[sig-storage] HostPath
  should give a volume the correct mode [LinuxOnly] [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48
[BeforeEach] [sig-storage] HostPath
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48
STEP: Creating a pod to test hostPath mode
May 14 13:08:23.222: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8598" to be "Succeeded or Failed"
May 14 13:08:23.241: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 18.803307ms
May 14 13:08:25.259: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036992891s
May 14 13:08:27.278: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.056322684s
STEP: Saw pod success
May 14 13:08:27.278: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
May 14 13:08:27.295: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
May 14 13:08:27.415: INFO: Waiting for pod pod-host-path-test to disappear
May 14 13:08:27.439: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:08:27.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-8598" for this suite.
•{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","total":335,"completed":208,"skipped":4185,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] Deployment
... skipping 28 lines ...
May 14 13:08:35.945: INFO: Pod "test-rolling-update-deployment-796dbc4547-ht7tt" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-796dbc4547-ht7tt test-rolling-update-deployment-796dbc4547- deployment-79 336af1db-1f0e-4cdf-80bb-e5d3efd74037 23287 0 2022-05-14 13:08:31 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:796dbc4547] map[cni.projectcalico.org/containerID:cec67f2556787198501178d5f8d207ed526d8fc5fc97d43b60e0a0e6b80c2a32 cni.projectcalico.org/podIP:192.168.204.187/32 cni.projectcalico.org/podIPs:192.168.204.187/32] [{apps/v1 ReplicaSet test-rolling-update-deployment-796dbc4547 f1e8a90c-5dc6-4977-a9ea-b0ef7f7e5700 0xc00504cde7 0xc00504cde8}] [] [{kube-controller-manager Update v1 2022-05-14 13:08:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f1e8a90c-5dc6-4977-a9ea-b0ef7f7e5700\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2022-05-14 13:08:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {Go-http-client Update v1 2022-05-14 13:08:33 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.204.187\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-k7tn4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:Resourc
eList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k7tn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-05t52q-md-0-scnhc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS
:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 13:08:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 13:08:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 13:08:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 13:08:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:192.168.204.187,StartTime:2022-05-14 13:08:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-14 13:08:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43,ContainerID:containerd://e0e614fb82a0fce01e0d1759b995c753a7031fdcda51befcbc5bd43bbaaacc9c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.204.187,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:08:35.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-79" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":335,"completed":209,"skipped":4215,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 14 13:08:36.126: INFO: Waiting up to 5m0s for pod "pod-c256ab95-2640-4004-b5e8-ed4e5e9b71a1" in namespace "emptydir-2196" to be "Succeeded or Failed"
May 14 13:08:36.152: INFO: Pod "pod-c256ab95-2640-4004-b5e8-ed4e5e9b71a1": Phase="Pending", Reason="", readiness=false. Elapsed: 25.667579ms
May 14 13:08:38.170: INFO: Pod "pod-c256ab95-2640-4004-b5e8-ed4e5e9b71a1": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 2.043443928s
STEP: Saw pod success
May 14 13:08:38.170: INFO: Pod "pod-c256ab95-2640-4004-b5e8-ed4e5e9b71a1" satisfied condition "Succeeded or Failed"
May 14 13:08:38.186: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod pod-c256ab95-2640-4004-b5e8-ed4e5e9b71a1 container test-container: <nil>
STEP: delete the pod
May 14 13:08:38.258: INFO: Waiting for pod pod-c256ab95-2640-4004-b5e8-ed4e5e9b71a1 to disappear
May 14 13:08:38.275: INFO: Pod pod-c256ab95-2640-4004-b5e8-ed4e5e9b71a1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:08:38.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2196" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":210,"skipped":4223,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 13 lines ...
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:08:49.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6619" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":335,"completed":211,"skipped":4251,"failed":0}
SSSSSSSS
------------------------------
[sig-node] Downward API
  should provide host IP as an env var [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Downward API
... skipping 3 lines ...
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward api env vars
May 14 13:08:49.815: INFO: Waiting up to 5m0s for pod "downward-api-48ce7cc4-842d-48cb-a02f-920795fd188f" in namespace "downward-api-5367" to be "Succeeded or Failed"
May 14 13:08:49.837: INFO: Pod "downward-api-48ce7cc4-842d-48cb-a02f-920795fd188f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.229038ms
May 14 13:08:51.855: INFO: Pod "downward-api-48ce7cc4-842d-48cb-a02f-920795fd188f": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 2.039538891s
STEP: Saw pod success
May 14 13:08:51.855: INFO: Pod "downward-api-48ce7cc4-842d-48cb-a02f-920795fd188f" satisfied condition "Succeeded or Failed"
May 14 13:08:51.872: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod downward-api-48ce7cc4-842d-48cb-a02f-920795fd188f container dapi-container: <nil>
STEP: delete the pod
May 14 13:08:51.933: INFO: Waiting for pod downward-api-48ce7cc4-842d-48cb-a02f-920795fd188f to disappear
May 14 13:08:51.966: INFO: Pod downward-api-48ce7cc4-842d-48cb-a02f-920795fd188f no longer exists
[AfterEach] [sig-node] Downward API
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:08:51.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5367" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":335,"completed":212,"skipped":4259,"failed":0}
------------------------------
[sig-storage] Projected downwardAPI
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
May 14 13:08:52.156: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ac94a270-9b6e-41d0-a07d-c5e63619da94" in namespace "projected-761" to be "Succeeded or Failed"
May 14 13:08:52.188: INFO: Pod "downwardapi-volume-ac94a270-9b6e-41d0-a07d-c5e63619da94": Phase="Pending", Reason="", readiness=false. Elapsed: 32.113241ms
May 14 13:08:54.206: INFO: Pod "downwardapi-volume-ac94a270-9b6e-41d0-a07d-c5e63619da94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.04960844s
STEP: Saw pod success
May 14 13:08:54.206: INFO: Pod "downwardapi-volume-ac94a270-9b6e-41d0-a07d-c5e63619da94" satisfied condition "Succeeded or Failed"
May 14 13:08:54.222: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod downwardapi-volume-ac94a270-9b6e-41d0-a07d-c5e63619da94 container client-container: <nil>
STEP: delete the pod
May 14 13:08:54.294: INFO: Waiting for pod downwardapi-volume-ac94a270-9b6e-41d0-a07d-c5e63619da94 to disappear
May 14 13:08:54.320: INFO: Pod downwardapi-volume-ac94a270-9b6e-41d0-a07d-c5e63619da94 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:08:54.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-761" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":335,"completed":213,"skipped":4259,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Docker Containers
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Docker Containers
... skipping 3 lines ...
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test override arguments
May 14 13:08:54.497: INFO: Waiting up to 5m0s for pod "client-containers-7d6c3da4-c870-4715-8ed0-25464f70adfd" in namespace "containers-2136" to be "Succeeded or Failed"
May 14 13:08:54.519: INFO: Pod "client-containers-7d6c3da4-c870-4715-8ed0-25464f70adfd": Phase="Pending", Reason="", readiness=false. Elapsed: 21.97456ms
May 14 13:08:56.538: INFO: Pod "client-containers-7d6c3da4-c870-4715-8ed0-25464f70adfd": Phase="Running", Reason="", readiness=true. Elapsed: 2.040662225s
May 14 13:08:58.556: INFO: Pod "client-containers-7d6c3da4-c870-4715-8ed0-25464f70adfd": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.058949966s
STEP: Saw pod success
May 14 13:08:58.556: INFO: Pod "client-containers-7d6c3da4-c870-4715-8ed0-25464f70adfd" satisfied condition "Succeeded or Failed"
May 14 13:08:58.573: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod client-containers-7d6c3da4-c870-4715-8ed0-25464f70adfd container agnhost-container: <nil>
STEP: delete the pod
May 14 13:08:58.632: INFO: Waiting for pod client-containers-7d6c3da4-c870-4715-8ed0-25464f70adfd to disappear
May 14 13:08:58.649: INFO: Pod client-containers-7d6c3da4-c870-4715-8ed0-25464f70adfd no longer exists
[AfterEach] [sig-node] Docker Containers
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:08:58.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2136" for this suite.
•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":335,"completed":214,"skipped":4285,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  custom resource defaulting for requests and from storage works [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 7 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
May 14 13:08:58.799: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:09:02.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6157" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":335,"completed":215,"skipped":4297,"failed":0}
SSSS
------------------------------
[sig-node] PrivilegedPod [NodeConformance]
  should enable privileged commands [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49
[BeforeEach] [sig-node] PrivilegedPod [NodeConformance]
... skipping 23 lines ...
May 14 13:09:05.029: INFO: ExecWithOptions: Clientset creation
May 14 13:09:05.029: INFO: ExecWithOptions: execute(POST https://capz-05t52q-8c0f6c09.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/e2e-privileged-pod-3356/pods/privileged-pod/exec?command=ip&command=link&command=add&command=dummy1&command=type&command=dummy&container=not-privileged-container&container=not-privileged-container&stderr=true&stdout=true %!s(MISSING))
[AfterEach] [sig-node] PrivilegedPod [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:09:05.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-privileged-pod-3356" for this suite.
•{"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":335,"completed":216,"skipped":4301,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] CronJob
  should replace jobs when ReplaceConcurrent [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] CronJob
... skipping 12 lines ...
STEP: Ensuring the job is replaced with a new one
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:11:01.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-258" for this suite.
•{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":335,"completed":217,"skipped":4329,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should serve a basic image on each replica with a public image [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] ReplicationController
... skipping 14 lines ...
May 14 13:11:03.825: INFO: Trying to dial the pod
May 14 13:11:08.881: INFO: Controller my-hostname-basic-b5ad8471-0f0a-4efc-85b7-b8200d3bb91e: Got expected result from replica 1 [my-hostname-basic-b5ad8471-0f0a-4efc-85b7-b8200d3bb91e-bbbx2]: "my-hostname-basic-b5ad8471-0f0a-4efc-85b7-b8200d3bb91e-bbbx2", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:11:08.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3137" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":335,"completed":218,"skipped":4349,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should test the lifecycle of an Endpoint [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] Services
... skipping 20 lines ...
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:11:09.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9389" for this suite.
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":335,"completed":219,"skipped":4377,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet
  Basic StatefulSet functionality [StatefulSetBasic]
    should list, patch and delete a collection of StatefulSets [Conformance]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] StatefulSet
... skipping 24 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120
May 14 13:11:29.756: INFO: Deleting all statefulset in ns statefulset-3343
[AfterEach] [sig-apps] StatefulSet
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:11:29.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3343" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":335,"completed":220,"skipped":4396,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] ReplicationController
... skipping 14 lines ...
May 14 13:11:31.127: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:11:31.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4001" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":335,"completed":221,"skipped":4414,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 9 lines ...
May 14 13:11:31.324: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
May 14 13:11:35.402: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:11:50.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9929" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":335,"completed":222,"skipped":4434,"failed":0}
------------------------------
[sig-node] Security Context When creating a container with runAsUser
  should run the container with uid 0 [LinuxOnly] [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
[BeforeEach] [sig-node] Security Context
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 0 [LinuxOnly] [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
May 14 13:11:51.190: INFO: Waiting up to 5m0s for pod "busybox-user-0-a2e439b3-199d-4e9e-8df3-16d7f99484da" in namespace "security-context-test-2346" to be "Succeeded or Failed"
May 14 13:11:51.210: INFO: Pod "busybox-user-0-a2e439b3-199d-4e9e-8df3-16d7f99484da": Phase="Pending", Reason="", readiness=false. Elapsed: 20.534848ms
May 14 13:11:53.229: INFO: Pod "busybox-user-0-a2e439b3-199d-4e9e-8df3-16d7f99484da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.038867665s
May 14 13:11:53.229: INFO: Pod "busybox-user-0-a2e439b3-199d-4e9e-8df3-16d7f99484da" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:11:53.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2346" for this suite.
•{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":335,"completed":223,"skipped":4445,"failed":0}
------------------------------
[sig-cli] Kubectl client Proxy server
  should support --unix-socket=/path [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
May 14 13:11:53.385: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl kubectl --server=https://capz-05t52q-8c0f6c09.northcentralus.cloudapp.azure.com:6443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig --namespace=kubectl-1484 proxy --unix-socket=/tmp/kubectl-proxy-unix2870522514/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:11:53.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1484" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":335,"completed":224,"skipped":4510,"failed":0}
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for CRD preserving unknown fields at the schema root [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 24 lines ...
May 14 13:12:01.152: INFO: stderr: ""
May 14 13:12:01.152: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-7663-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:12:04.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9324" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":335,"completed":225,"skipped":4541,"failed":0}
------------------------------
[sig-node] Docker Containers
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Docker Containers
... skipping 6 lines ...
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-node] Docker Containers
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:12:07.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4001" for this suite.
•{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":335,"completed":226,"skipped":4616,"failed":0}
------------------------------
[sig-api-machinery] Watchers
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] Watchers
... skipping 27 lines ...
May 14 13:12:17.409: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4510 e1edc047-b232-4fbd-a254-28266bfdf7d7 24577 0 2022-05-14 13:12:07 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-05-14 13:12:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May 14 13:12:17.409: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4510 e1edc047-b232-4fbd-a254-28266bfdf7d7 24577 0 2022-05-14 13:12:07 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-05-14 13:12:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:12:27.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4510" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":335,"completed":227,"skipped":4673,"failed":0}
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should mutate custom resource [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:12:34.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3856" for this suite.
STEP: Destroying namespace "webhook-3856-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":335,"completed":228,"skipped":4675,"failed":0}
------------------------------
[sig-cli] Kubectl client Kubectl logs
  should be able to retrieve and filter logs [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-cli] Kubectl client
... skipping 52 lines ...
May 14 13:12:42.508: INFO: stderr: ""
May 14 13:12:42.508: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:12:42.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3994" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":335,"completed":229,"skipped":4705,"failed":0}
------------------------------
[sig-network] Services
  should find a service from listing all namespaces [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] Services
... skipping 11 lines ...
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:12:42.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-357" for this suite.
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":335,"completed":230,"skipped":4712,"failed":0}
------------------------------
[sig-storage] Downward API volume
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Downward API volume
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
May 14 13:12:42.835: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7093dda8-1efe-4551-b035-f85a8feeec68" in namespace "downward-api-6648" to be "Succeeded or Failed"
May 14 13:12:42.859: INFO: Pod "downwardapi-volume-7093dda8-1efe-4551-b035-f85a8feeec68": Phase="Pending", Reason="", readiness=false. Elapsed: 24.326483ms
May 14 13:12:44.875: INFO: Pod "downwardapi-volume-7093dda8-1efe-4551-b035-f85a8feeec68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.040388234s
STEP: Saw pod success
May 14 13:12:44.876: INFO: Pod "downwardapi-volume-7093dda8-1efe-4551-b035-f85a8feeec68" satisfied condition "Succeeded or Failed"
May 14 13:12:44.892: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod downwardapi-volume-7093dda8-1efe-4551-b035-f85a8feeec68 container client-container: <nil>
STEP: delete the pod
May 14 13:12:44.974: INFO: Waiting for pod downwardapi-volume-7093dda8-1efe-4551-b035-f85a8feeec68 to disappear
May 14 13:12:44.989: INFO: Pod downwardapi-volume-7093dda8-1efe-4551-b035-f85a8feeec68 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:12:44.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6648" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":231,"skipped":4733,"failed":0}
------------------------------
[sig-network] DNS
  should provide DNS for ExternalName services [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] DNS
... skipping 27 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 14 13:12:53.476: INFO: File wheezy_udp@dns-test-service-3.dns-4370.svc.cluster.local from pod dns-4370/dns-test-ad34096a-fe17-4d35-b5a7-6374104b6b5d contains 'foo.example.com. ' instead of 'bar.example.com.'
May 14 13:12:53.491: INFO: File jessie_udp@dns-test-service-3.dns-4370.svc.cluster.local from pod dns-4370/dns-test-ad34096a-fe17-4d35-b5a7-6374104b6b5d contains 'foo.example.com. ' instead of 'bar.example.com.'
May 14 13:12:53.491: INFO: Lookups using dns-4370/dns-test-ad34096a-fe17-4d35-b5a7-6374104b6b5d failed for: [wheezy_udp@dns-test-service-3.dns-4370.svc.cluster.local jessie_udp@dns-test-service-3.dns-4370.svc.cluster.local]
May 14 13:12:58.511: INFO: File wheezy_udp@dns-test-service-3.dns-4370.svc.cluster.local from pod dns-4370/dns-test-ad34096a-fe17-4d35-b5a7-6374104b6b5d contains 'foo.example.com. ' instead of 'bar.example.com.'
May 14 13:12:58.527: INFO: File jessie_udp@dns-test-service-3.dns-4370.svc.cluster.local from pod dns-4370/dns-test-ad34096a-fe17-4d35-b5a7-6374104b6b5d contains 'foo.example.com. ' instead of 'bar.example.com.'
May 14 13:12:58.527: INFO: Lookups using dns-4370/dns-test-ad34096a-fe17-4d35-b5a7-6374104b6b5d failed for: [wheezy_udp@dns-test-service-3.dns-4370.svc.cluster.local jessie_udp@dns-test-service-3.dns-4370.svc.cluster.local]
May 14 13:13:03.508: INFO: File wheezy_udp@dns-test-service-3.dns-4370.svc.cluster.local from pod dns-4370/dns-test-ad34096a-fe17-4d35-b5a7-6374104b6b5d contains 'foo.example.com. ' instead of 'bar.example.com.'
May 14 13:13:03.523: INFO: File jessie_udp@dns-test-service-3.dns-4370.svc.cluster.local from pod dns-4370/dns-test-ad34096a-fe17-4d35-b5a7-6374104b6b5d contains 'foo.example.com. ' instead of 'bar.example.com.'
May 14 13:13:03.523: INFO: Lookups using dns-4370/dns-test-ad34096a-fe17-4d35-b5a7-6374104b6b5d failed for: [wheezy_udp@dns-test-service-3.dns-4370.svc.cluster.local jessie_udp@dns-test-service-3.dns-4370.svc.cluster.local]
May 14 13:13:08.509: INFO: File wheezy_udp@dns-test-service-3.dns-4370.svc.cluster.local from pod dns-4370/dns-test-ad34096a-fe17-4d35-b5a7-6374104b6b5d contains 'foo.example.com. ' instead of 'bar.example.com.'
May 14 13:13:08.525: INFO: File jessie_udp@dns-test-service-3.dns-4370.svc.cluster.local from pod dns-4370/dns-test-ad34096a-fe17-4d35-b5a7-6374104b6b5d contains 'foo.example.com. ' instead of 'bar.example.com.'
May 14 13:13:08.525: INFO: Lookups using dns-4370/dns-test-ad34096a-fe17-4d35-b5a7-6374104b6b5d failed for: [wheezy_udp@dns-test-service-3.dns-4370.svc.cluster.local jessie_udp@dns-test-service-3.dns-4370.svc.cluster.local]
May 14 13:13:13.508: INFO: File wheezy_udp@dns-test-service-3.dns-4370.svc.cluster.local from pod dns-4370/dns-test-ad34096a-fe17-4d35-b5a7-6374104b6b5d contains 'foo.example.com. ' instead of 'bar.example.com.'
May 14 13:13:13.524: INFO: File jessie_udp@dns-test-service-3.dns-4370.svc.cluster.local from pod dns-4370/dns-test-ad34096a-fe17-4d35-b5a7-6374104b6b5d contains 'foo.example.com. ' instead of 'bar.example.com.'
May 14 13:13:13.524: INFO: Lookups using dns-4370/dns-test-ad34096a-fe17-4d35-b5a7-6374104b6b5d failed for: [wheezy_udp@dns-test-service-3.dns-4370.svc.cluster.local jessie_udp@dns-test-service-3.dns-4370.svc.cluster.local]
May 14 13:13:18.526: INFO: DNS probes using dns-test-ad34096a-fe17-4d35-b5a7-6374104b6b5d succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4370.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4370.svc.cluster.local; sleep 1; done
... skipping 9 lines ...
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:13:22.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4370" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":335,"completed":232,"skipped":4768,"failed":0}
------------------------------
[sig-network] DNS
  should support configurable pod DNS nameservers [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] DNS
... skipping 21 lines ...
May 14 13:13:25.342: INFO: ExecWithOptions: execute(POST https://capz-05t52q-8c0f6c09.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/dns-5160/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-server-list&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING))
May 14 13:13:25.625: INFO: Deleting pod test-dns-nameservers...
[AfterEach] [sig-network] DNS
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:13:25.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5160" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":335,"completed":233,"skipped":4770,"failed":0}
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for multiple CRDs of different groups [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 9 lines ...
May 14 13:13:25.815: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
May 14 13:13:29.665: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:13:46.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1185" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":335,"completed":234,"skipped":4788,"failed":0}
------------------------------
[sig-node] InitContainer [NodeConformance]
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 14 13:13:46.667: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating the pod
May 14 13:13:46.767: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:13:49.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2058" for this suite.
•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":335,"completed":235,"skipped":4795,"failed":0}
------------------------------
[sig-storage] Projected secret
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90
[BeforeEach] [sig-storage] Projected secret
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90
STEP: Creating projection with secret that has name projected-secret-test-9088b7ef-0775-4842-a820-f5dc02ddb2b0
STEP: Creating a pod to test consume secrets
May 14 13:13:49.922: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6f302e0e-c0bc-4544-9440-69216c513c0b" in namespace "projected-2006" to be "Succeeded or Failed"
May 14 13:13:49.942: INFO: Pod "pod-projected-secrets-6f302e0e-c0bc-4544-9440-69216c513c0b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.249982ms
May 14 13:13:51.958: INFO: Pod "pod-projected-secrets-6f302e0e-c0bc-4544-9440-69216c513c0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.036061868s
STEP: Saw pod success
May 14 13:13:51.958: INFO: Pod "pod-projected-secrets-6f302e0e-c0bc-4544-9440-69216c513c0b" satisfied condition "Succeeded or Failed"
May 14 13:13:51.976: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod pod-projected-secrets-6f302e0e-c0bc-4544-9440-69216c513c0b container projected-secret-volume-test: <nil>
STEP: delete the pod
May 14 13:13:52.055: INFO: Waiting for pod pod-projected-secrets-6f302e0e-c0bc-4544-9440-69216c513c0b to disappear
May 14 13:13:52.073: INFO: Pod pod-projected-secrets-6f302e0e-c0bc-4544-9440-69216c513c0b no longer exists
[AfterEach] [sig-storage] Projected secret
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:13:52.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2006" for this suite.
STEP: Destroying namespace "secret-namespace-6696" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":335,"completed":236,"skipped":4868,"failed":0}
------------------------------
[sig-network] Ingress API
  should support creating Ingress API operations [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] Ingress API
... skipping 26 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Ingress API
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:13:52.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-1963" for this suite.
•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":335,"completed":237,"skipped":4868,"failed":0}
------------------------------
[sig-network] EndpointSliceMirroring
  should mirror a custom Endpoints resource through create update and delete [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] EndpointSliceMirroring
... skipping 12 lines ...
STEP: mirroring an update to a custom Endpoint
STEP: mirroring deletion of a custom Endpoint
[AfterEach] [sig-network] EndpointSliceMirroring
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:13:55.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslicemirroring-1768" for this suite.
•{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":335,"completed":238,"skipped":4872,"failed":0}
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] Networking
... skipping 39 lines ...
May 14 13:14:17.992: INFO: reached 192.168.204.178 after 0/1 tries
May 14 13:14:17.992: INFO: Going to retry 0 out of 2 pods....
[AfterEach] [sig-network] Networking
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:14:17.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5602" for this suite.
[32m•[0m{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":335,"completed":239,"skipped":4897,"failed":0} [36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m[36mS[0m [90m------------------------------[0m [0m[sig-node] Variable Expansion[0m [1mshould allow substituting values in a container's args [NodeConformance] [Conformance][0m [37m/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633[0m [BeforeEach] [sig-node] Variable Expansion ... skipping 3 lines ... [1mSTEP[0m: Building a namespace api object, basename var-expansion [1mSTEP[0m: Waiting for a default service account to be provisioned in namespace [1mSTEP[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [1mSTEP[0m: Creating a pod to test substitution in container's args May 14 13:14:18.168: INFO: Waiting up to 5m0s for pod "var-expansion-76d45e40-1c2b-42d4-a425-9415d9ea665b" in namespace "var-expansion-6632" to be "Succeeded or Failed" May 14 13:14:18.191: INFO: Pod "var-expansion-76d45e40-1c2b-42d4-a425-9415d9ea665b": Phase="Pending", Reason="", readiness=false. Elapsed: 23.038762ms May 14 13:14:20.208: INFO: Pod "var-expansion-76d45e40-1c2b-42d4-a425-9415d9ea665b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.040145182s STEP: Saw pod success May 14 13:14:20.208: INFO: Pod "var-expansion-76d45e40-1c2b-42d4-a425-9415d9ea665b" satisfied condition "Succeeded or Failed" May 14 13:14:20.223: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod var-expansion-76d45e40-1c2b-42d4-a425-9415d9ea665b container dapi-container: <nil> STEP: delete the pod May 14 13:14:20.287: INFO: Waiting for pod var-expansion-76d45e40-1c2b-42d4-a425-9415d9ea665b to disappear May 14 13:14:20.302: INFO: Pod var-expansion-76d45e40-1c2b-42d4-a425-9415d9ea665b no longer exists [AfterEach] [sig-node] Variable Expansion /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 13:14:20.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6632" for this suite. •{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":335,"completed":240,"skipped":4935,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [BeforeEach] [sig-node] Variable Expansion ... skipping 3 lines ... 
STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test substitution in container's command May 14 13:14:20.471: INFO: Waiting up to 5m0s for pod "var-expansion-7133a9ae-834e-4734-8556-873cebe3e875" in namespace "var-expansion-4627" to be "Succeeded or Failed" May 14 13:14:20.490: INFO: Pod "var-expansion-7133a9ae-834e-4734-8556-873cebe3e875": Phase="Pending", Reason="", readiness=false. Elapsed: 18.587528ms May 14 13:14:22.506: INFO: Pod "var-expansion-7133a9ae-834e-4734-8556-873cebe3e875": Phase="Running", Reason="", readiness=true. Elapsed: 2.034900891s May 14 13:14:24.522: INFO: Pod "var-expansion-7133a9ae-834e-4734-8556-873cebe3e875": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050695216s STEP: Saw pod success May 14 13:14:24.522: INFO: Pod "var-expansion-7133a9ae-834e-4734-8556-873cebe3e875" satisfied condition "Succeeded or Failed" May 14 13:14:24.540: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod var-expansion-7133a9ae-834e-4734-8556-873cebe3e875 container dapi-container: <nil> STEP: delete the pod May 14 13:14:24.601: INFO: Waiting for pod var-expansion-7133a9ae-834e-4734-8556-873cebe3e875 to disappear May 14 13:14:24.626: INFO: Pod var-expansion-7133a9ae-834e-4734-8556-873cebe3e875 no longer exists [AfterEach] [sig-node] Variable Expansion /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 13:14:24.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4627" for this suite. 
•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":335,"completed":241,"skipped":4965,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [BeforeEach] [sig-apps] ReplicationController ... skipping 13 lines ... STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 13:14:26.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8024" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":335,"completed":242,"skipped":4974,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [BeforeEach] [sig-api-machinery] ResourceQuota ... skipping 20 lines ... 
STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 13:14:43.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5160" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":335,"completed":243,"skipped":5001,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [BeforeEach] [sig-cli] Kubectl client ... skipping 20 lines ... May 14 13:14:47.849: INFO: stderr: "" May 14 13:14:47.849: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 13:14:47.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3996" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":335,"completed":244,"skipped":5005,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [BeforeEach] [sig-api-machinery] ResourceQuota ... skipping 15 lines ... STEP: Deleting Services STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 13:14:59.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7382" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":335,"completed":245,"skipped":5028,"failed":0} ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [BeforeEach] [sig-network] Services ... skipping 59 lines ... [AfterEach] [sig-network] Services /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 13:15:34.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8868" for this suite. 
[AfterEach] [sig-network] Services /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 •{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":335,"completed":246,"skipped":5028,"failed":0} SSSSSSSSS ------------------------------ [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388 [BeforeEach] [sig-node] Container Runtime ... skipping 9 lines ... STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 13:15:37.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1101" for this suite. •{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":335,"completed":247,"skipped":5037,"failed":0} SSS ------------------------------ [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [BeforeEach] [sig-node] Secrets ... skipping 4 lines ... 
STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating secret secrets-9426/secret-test-1322be41-a084-4566-994c-60e5a4269f00 STEP: Creating a pod to test consume secrets May 14 13:15:37.652: INFO: Waiting up to 5m0s for pod "pod-configmaps-1a7ca7ac-25df-4a28-a628-f0adc9a3e2ce" in namespace "secrets-9426" to be "Succeeded or Failed" May 14 13:15:37.675: INFO: Pod "pod-configmaps-1a7ca7ac-25df-4a28-a628-f0adc9a3e2ce": Phase="Pending", Reason="", readiness=false. Elapsed: 23.037005ms May 14 13:15:39.691: INFO: Pod "pod-configmaps-1a7ca7ac-25df-4a28-a628-f0adc9a3e2ce": Phase="Running", Reason="", readiness=true. Elapsed: 2.039361637s May 14 13:15:41.709: INFO: Pod "pod-configmaps-1a7ca7ac-25df-4a28-a628-f0adc9a3e2ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056495474s STEP: Saw pod success May 14 13:15:41.709: INFO: Pod "pod-configmaps-1a7ca7ac-25df-4a28-a628-f0adc9a3e2ce" satisfied condition "Succeeded or Failed" May 14 13:15:41.724: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod pod-configmaps-1a7ca7ac-25df-4a28-a628-f0adc9a3e2ce container env-test: <nil> STEP: delete the pod May 14 13:15:41.778: INFO: Waiting for pod pod-configmaps-1a7ca7ac-25df-4a28-a628-f0adc9a3e2ce to disappear May 14 13:15:41.794: INFO: Pod pod-configmaps-1a7ca7ac-25df-4a28-a628-f0adc9a3e2ce no longer exists [AfterEach] [sig-node] Secrets /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 13:15:41.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9426" for this suite. 
•{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":335,"completed":248,"skipped":5040,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [BeforeEach] [sig-node] Downward API ... skipping 3 lines ... STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward api env vars May 14 13:15:41.968: INFO: Waiting up to 5m0s for pod "downward-api-d1c83ef9-e23c-4fdd-bf75-ffacf783feb9" in namespace "downward-api-436" to be "Succeeded or Failed" May 14 13:15:41.992: INFO: Pod "downward-api-d1c83ef9-e23c-4fdd-bf75-ffacf783feb9": Phase="Pending", Reason="", readiness=false. Elapsed: 24.31378ms May 14 13:15:44.009: INFO: Pod "downward-api-d1c83ef9-e23c-4fdd-bf75-ffacf783feb9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.04170827s STEP: Saw pod success May 14 13:15:44.010: INFO: Pod "downward-api-d1c83ef9-e23c-4fdd-bf75-ffacf783feb9" satisfied condition "Succeeded or Failed" May 14 13:15:44.026: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod downward-api-d1c83ef9-e23c-4fdd-bf75-ffacf783feb9 container dapi-container: <nil> STEP: delete the pod May 14 13:15:44.096: INFO: Waiting for pod downward-api-d1c83ef9-e23c-4fdd-bf75-ffacf783feb9 to disappear May 14 13:15:44.114: INFO: Pod downward-api-d1c83ef9-e23c-4fdd-bf75-ffacf783feb9 no longer exists [AfterEach] [sig-node] Downward API /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 13:15:44.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-436" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":335,"completed":249,"skipped":5052,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [BeforeEach] [sig-apps] Deployment ... skipping 45 lines ... 
&Pod{ObjectMeta:{webserver-deployment-566f96c878-85zbb webserver-deployment-566f96c878- deployment-7714 c5263cec-d534-446c-afd1-c5159ccebebd 26538 0 2022-05-14 13:15:52 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[cni.projectcalico.org/containerID:681d054cf4c174ab44d186ba577b50ebc9974fb4fb18e3ce3ccc178207f45ba4 cni.projectcalico.org/podIP:192.168.92.88/32 cni.projectcalico.org/podIPs:192.168.92.88/32] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 02164330-8c0f-400b-bf14-15da0250662f 0xc005a1c9a0 0xc005a1c9a1}] [] [{kube-controller-manager Update v1 2022-05-14 13:15:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02164330-8c0f-400b-bf14-15da0250662f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2022-05-14 13:15:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-l62z5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l62z5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-05t52q-md-0-dxhn8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 13:15:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 13:15:55.005: INFO: Pod "webserver-deployment-566f96c878-8mmd5" is not available: 
&Pod{ObjectMeta:{webserver-deployment-566f96c878-8mmd5 webserver-deployment-566f96c878- deployment-7714 9ad791fc-229b-4231-89cf-9b3b5407785f 26364 0 2022-05-14 13:15:50 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[cni.projectcalico.org/containerID:fc3faf3f052b4862d3d11a314be28e18efbccf3dfd6c91874a79b2268990cd14 cni.projectcalico.org/podIP:192.168.204.180/32 cni.projectcalico.org/podIPs:192.168.204.180/32] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 02164330-8c0f-400b-bf14-15da0250662f 0xc005a1cb20 0xc005a1cb21}] [] [{Go-http-client Update v1 2022-05-14 13:15:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {kube-controller-manager Update v1 2022-05-14 13:15:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02164330-8c0f-400b-bf14-15da0250662f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2022-05-14 13:15:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xj8cz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xj8cz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-05t52q-md-0-scnhc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 13:15:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 13:15:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 
13:15:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 13:15:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:,StartTime:2022-05-14 13:15:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 13:15:55.005: INFO: Pod "webserver-deployment-566f96c878-9cdsb" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-9cdsb webserver-deployment-566f96c878- deployment-7714 01e871fa-5add-4125-b32e-511252876ec8 26534 0 2022-05-14 13:15:52 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[cni.projectcalico.org/containerID:90b6288558b9ebd19f73d597f50f092f4f5b980e12390bd9f348591d3402d0b6 cni.projectcalico.org/podIP:192.168.92.112/32 cni.projectcalico.org/podIPs:192.168.92.112/32] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 02164330-8c0f-400b-bf14-15da0250662f 0xc005a1cd10 0xc005a1cd11}] [] [{kube-controller-manager Update v1 2022-05-14 13:15:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02164330-8c0f-400b-bf14-15da0250662f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2022-05-14 13:15:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kt856,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil
,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kt856,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-05t52q-md-0-dxhn8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServ
iceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 13:15:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 13:15:55.006: INFO: Pod "webserver-deployment-566f96c878-c4sdx" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-c4sdx webserver-deployment-566f96c878- deployment-7714 6630bbaa-1385-4ba4-9970-4b032891ee60 26463 0 2022-05-14 13:15:50 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[cni.projectcalico.org/containerID:8aa1c582682f65fddc9e85504d4f389cb33a69b68d5589bbda08908e3f7108de cni.projectcalico.org/podIP:192.168.92.108/32 cni.projectcalico.org/podIPs:192.168.92.108/32] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 02164330-8c0f-400b-bf14-15da0250662f 0xc005a1ce90 0xc005a1ce91}] [] [{kube-controller-manager Update v1 2022-05-14 13:15:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02164330-8c0f-400b-bf14-15da0250662f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2022-05-14 13:15:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {Go-http-client Update v1 2022-05-14 13:15:53 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.92.108\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zlbgv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,}
,},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zlbgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-05t52q-md-0-dxhn8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodRead
inessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 13:15:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 13:15:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 13:15:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 13:15:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.4,PodIP:192.168.92.108,StartTime:2022-05-14 13:15:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.92.108,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 13:15:55.006: INFO: Pod 
"webserver-deployment-566f96c878-fx9cx" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-fx9cx webserver-deployment-566f96c878- deployment-7714 f401787b-119e-4a96-9c5c-1144d9cda0b2 26379 0 2022-05-14 13:15:50 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[cni.projectcalico.org/containerID:dd68f4c5622a65ad551144bda98970f58d335a03f864c7d242b43f68448fec0c cni.projectcalico.org/podIP:192.168.204.162/32 cni.projectcalico.org/podIPs:192.168.204.162/32] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 02164330-8c0f-400b-bf14-15da0250662f 0xc005a1d0c0 0xc005a1d0c1}] [] [{Go-http-client Update v1 2022-05-14 13:15:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {kube-controller-manager Update v1 2022-05-14 13:15:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02164330-8c0f-400b-bf14-15da0250662f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2022-05-14 13:15:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nwh2c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nwh2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-05t52q-md-0-scnhc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 13:15:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 13:15:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 
13:15:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 13:15:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:,StartTime:2022-05-14 13:15:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 13:15:55.006: INFO: Pod "webserver-deployment-566f96c878-jsv29" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-jsv29 webserver-deployment-566f96c878- deployment-7714 02bca673-b86f-41f3-b396-9620e9de0183 26483 0 2022-05-14 13:15:50 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[cni.projectcalico.org/containerID:c142288120a90786522bcf73634417705af65ecef67c5dfd0281624ca81403b9 cni.projectcalico.org/podIP:192.168.92.98/32 cni.projectcalico.org/podIPs:192.168.92.98/32] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 02164330-8c0f-400b-bf14-15da0250662f 0xc005a1d2f0 0xc005a1d2f1}] [] [{kube-controller-manager Update v1 2022-05-14 13:15:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02164330-8c0f-400b-bf14-15da0250662f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 2022-05-14 13:15:51 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.92.98\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status} {calico Update v1 2022-05-14 13:15:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9x579,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9x579,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-05t52q-md-0-dxhn8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 13:15:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 13:15:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 
13:15:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 13:15:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.4,PodIP:192.168.92.98,StartTime:2022-05-14 13:15:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ImagePullBackOff,Message:Back-off pulling image "webserver:404",},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.92.98,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 14 13:15:55.006: INFO: Pod "webserver-deployment-566f96c878-jwvcf" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-jwvcf webserver-deployment-566f96c878- deployment-7714 e07af90c-b89b-47f4-9be7-e0a1ba0ba051 26361 0 2022-05-14 13:15:50 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[cni.projectcalico.org/containerID:43ad5a6b892205ed85fb13950fd773d487f8fe249317872ac5fcabd2bb4efc1c cni.projectcalico.org/podIP:192.168.204.136/32 cni.projectcalico.org/podIPs:192.168.204.136/32] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 02164330-8c0f-400b-bf14-15da0250662f 0xc005a1d530 0xc005a1d531}] [] [{Go-http-client Update v1 2022-05-14 13:15:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {kube-controller-manager Update v1 2022-05-14 13:15:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02164330-8c0f-400b-bf14-15da0250662f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2022-05-14 13:15:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fp6sp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fp6sp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-05t52q-md-0-scnhc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 13:15:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 13:15:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 
13:15:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 13:15:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:,StartTime:2022-05-14 13:15:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} ... skipping 46 lines ... May 14 13:15:55.010: INFO: Pod "webserver-deployment-5d9fdcc779-wzz58" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-wzz58 webserver-deployment-5d9fdcc779- deployment-7714 2a953214-2d15-4425-be6e-c41bd119da70 26452 0 2022-05-14 13:15:52 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 a284a04a-8c14-4476-810e-5e7aebc533a6 0xc005aa3f70 0xc005aa3f71}] [] [{kube-controller-manager Update v1 2022-05-14 13:15:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a284a04a-8c14-4476-810e-5e7aebc533a6\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jwd9z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jwd9z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,Pr
ocMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-05t52q-md-0-dxhn8,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-14 13:15:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:15:55.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7714" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":335,"completed":250,"skipped":5093,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should mutate configmap [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:16:05.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7395" for this suite.
STEP: Destroying namespace "webhook-7395-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":335,"completed":251,"skipped":5096,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context
  when creating containers with AllowPrivilegeEscalation
  should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
[BeforeEach] [sig-node] Security Context
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
May 14 13:16:05.474: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-d16c475c-cc83-4a69-bcea-2b6e6280ab94" in namespace "security-context-test-1943" to be "Succeeded or Failed"
May 14 13:16:05.493: INFO: Pod "alpine-nnp-nil-d16c475c-cc83-4a69-bcea-2b6e6280ab94": Phase="Pending", Reason="", readiness=false. Elapsed: 18.391093ms
May 14 13:16:07.509: INFO: Pod "alpine-nnp-nil-d16c475c-cc83-4a69-bcea-2b6e6280ab94": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.034645235s
May 14 13:16:09.532: INFO: Pod "alpine-nnp-nil-d16c475c-cc83-4a69-bcea-2b6e6280ab94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057385298s
May 14 13:16:11.547: INFO: Pod "alpine-nnp-nil-d16c475c-cc83-4a69-bcea-2b6e6280ab94": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073188s
May 14 13:16:13.567: INFO: Pod "alpine-nnp-nil-d16c475c-cc83-4a69-bcea-2b6e6280ab94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.092555252s
May 14 13:16:13.567: INFO: Pod "alpine-nnp-nil-d16c475c-cc83-4a69-bcea-2b6e6280ab94" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:16:13.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1943" for this suite.
•{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":335,"completed":252,"skipped":5117,"failed":0}
SSSSSS
------------------------------
[sig-network] Networking
  Granular Checks: Pods
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] Networking
... skipping 39 lines ...
May 14 13:16:36.695: INFO: reached 192.168.204.142 after 0/1 tries
May 14 13:16:36.695: INFO: Going to retry 0 out of 2 pods....
[AfterEach] [sig-network] Networking
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:16:36.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8785" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":335,"completed":253,"skipped":5123,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should delete a collection of services [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] Services
... skipping 16 lines ...
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:16:37.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7858" for this suite.
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should delete a collection of services [Conformance]","total":335,"completed":254,"skipped":5145,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context
  When creating a pod with privileged
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Security Context
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
May 14 13:16:37.325: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-28098e65-d56e-42a7-a2bb-a650558fb1c6" in namespace "security-context-test-1719" to be "Succeeded or Failed"
May 14 13:16:37.343: INFO: Pod "busybox-privileged-false-28098e65-d56e-42a7-a2bb-a650558fb1c6": Phase="Pending", Reason="", readiness=false. Elapsed: 17.543231ms
May 14 13:16:39.359: INFO: Pod "busybox-privileged-false-28098e65-d56e-42a7-a2bb-a650558fb1c6": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 2.034392449s
May 14 13:16:39.359: INFO: Pod "busybox-privileged-false-28098e65-d56e-42a7-a2bb-a650558fb1c6" satisfied condition "Succeeded or Failed"
May 14 13:16:39.379: INFO: Got logs for pod "busybox-privileged-false-28098e65-d56e-42a7-a2bb-a650558fb1c6": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:16:39.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1719" for this suite.
•{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":255,"skipped":5160,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Variable Expansion
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Variable Expansion
... skipping 3 lines ...
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test env composition
May 14 13:16:39.549: INFO: Waiting up to 5m0s for pod "var-expansion-6b95d237-c0d4-4476-bb3f-c39aa2c0d4fd" in namespace "var-expansion-4605" to be "Succeeded or Failed"
May 14 13:16:39.568: INFO: Pod "var-expansion-6b95d237-c0d4-4476-bb3f-c39aa2c0d4fd": Phase="Pending", Reason="", readiness=false. Elapsed: 19.200473ms
May 14 13:16:41.585: INFO: Pod "var-expansion-6b95d237-c0d4-4476-bb3f-c39aa2c0d4fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.036259439s
STEP: Saw pod success
May 14 13:16:41.585: INFO: Pod "var-expansion-6b95d237-c0d4-4476-bb3f-c39aa2c0d4fd" satisfied condition "Succeeded or Failed"
May 14 13:16:41.600: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod var-expansion-6b95d237-c0d4-4476-bb3f-c39aa2c0d4fd container dapi-container: <nil>
STEP: delete the pod
May 14 13:16:41.654: INFO: Waiting for pod var-expansion-6b95d237-c0d4-4476-bb3f-c39aa2c0d4fd to disappear
May 14 13:16:41.669: INFO: Pod var-expansion-6b95d237-c0d4-4476-bb3f-c39aa2c0d4fd no longer exists
[AfterEach] [sig-node] Variable Expansion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:16:41.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4605" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":335,"completed":256,"skipped":5182,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-node] Pods
  should be submitted and removed [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Pods
... skipping 16 lines ...
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] [sig-node] Pods
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:16:46.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4997" for this suite.
•{"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":335,"completed":257,"skipped":5195,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] ConfigMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-volume-9d0e7e52-3618-4d68-9214-1a655b21507a
STEP: Creating a pod to test consume configMaps
May 14 13:16:46.747: INFO: Waiting up to 5m0s for pod "pod-configmaps-44e0ef1f-7767-4a68-8e30-a2d6051c7309" in namespace "configmap-5017" to be "Succeeded or Failed"
May 14 13:16:46.766: INFO: Pod "pod-configmaps-44e0ef1f-7767-4a68-8e30-a2d6051c7309": Phase="Pending", Reason="", readiness=false. Elapsed: 19.663561ms
May 14 13:16:48.783: INFO: Pod "pod-configmaps-44e0ef1f-7767-4a68-8e30-a2d6051c7309": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036364767s
May 14 13:16:50.800: INFO: Pod "pod-configmaps-44e0ef1f-7767-4a68-8e30-a2d6051c7309": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053281435s
STEP: Saw pod success
May 14 13:16:50.800: INFO: Pod "pod-configmaps-44e0ef1f-7767-4a68-8e30-a2d6051c7309" satisfied condition "Succeeded or Failed"
May 14 13:16:50.816: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod pod-configmaps-44e0ef1f-7767-4a68-8e30-a2d6051c7309 container agnhost-container: <nil>
STEP: delete the pod
May 14 13:16:50.870: INFO: Waiting for pod pod-configmaps-44e0ef1f-7767-4a68-8e30-a2d6051c7309 to disappear
May 14 13:16:50.885: INFO: Pod pod-configmaps-44e0ef1f-7767-4a68-8e30-a2d6051c7309 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:16:50.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5017" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":335,"completed":258,"skipped":5223,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Downward API
... skipping 3 lines ...
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward api env vars
May 14 13:16:51.055: INFO: Waiting up to 5m0s for pod "downward-api-2dfb9161-ff9e-4f31-9038-d8a70f52f7ad" in namespace "downward-api-4934" to be "Succeeded or Failed"
May 14 13:16:51.075: INFO: Pod "downward-api-2dfb9161-ff9e-4f31-9038-d8a70f52f7ad": Phase="Pending", Reason="", readiness=false.
Elapsed: 20.04506ms
May 14 13:16:53.094: INFO: Pod "downward-api-2dfb9161-ff9e-4f31-9038-d8a70f52f7ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.039203338s
STEP: Saw pod success
May 14 13:16:53.095: INFO: Pod "downward-api-2dfb9161-ff9e-4f31-9038-d8a70f52f7ad" satisfied condition "Succeeded or Failed"
May 14 13:16:53.118: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod downward-api-2dfb9161-ff9e-4f31-9038-d8a70f52f7ad container dapi-container: <nil>
STEP: delete the pod
May 14 13:16:53.173: INFO: Waiting for pod downward-api-2dfb9161-ff9e-4f31-9038-d8a70f52f7ad to disappear
May 14 13:16:53.188: INFO: Pod downward-api-2dfb9161-ff9e-4f31-9038-d8a70f52f7ad no longer exists
[AfterEach] [sig-node] Downward API
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:16:53.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4934" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":335,"completed":259,"skipped":5306,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Secrets
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name secret-test-5ee1956e-b548-4ed0-b079-3632ca1b1a2d
STEP: Creating a pod to test consume secrets
May 14 13:16:53.387: INFO: Waiting up to 5m0s for pod "pod-secrets-dd1ed191-297a-46aa-854b-e184d7170a60" in namespace "secrets-6540" to be "Succeeded or Failed"
May 14 13:16:53.410: INFO: Pod "pod-secrets-dd1ed191-297a-46aa-854b-e184d7170a60": Phase="Pending", Reason="", readiness=false. Elapsed: 23.115796ms
May 14 13:16:55.427: INFO: Pod "pod-secrets-dd1ed191-297a-46aa-854b-e184d7170a60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.039868934s
STEP: Saw pod success
May 14 13:16:55.427: INFO: Pod "pod-secrets-dd1ed191-297a-46aa-854b-e184d7170a60" satisfied condition "Succeeded or Failed"
May 14 13:16:55.442: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod pod-secrets-dd1ed191-297a-46aa-854b-e184d7170a60 container secret-volume-test: <nil>
STEP: delete the pod
May 14 13:16:55.501: INFO: Waiting for pod pod-secrets-dd1ed191-297a-46aa-854b-e184d7170a60 to disappear
May 14 13:16:55.516: INFO: Pod pod-secrets-dd1ed191-297a-46aa-854b-e184d7170a60 no longer exists
[AfterEach] [sig-storage] Secrets
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:16:55.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6540" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":260,"skipped":5342,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:16:59.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6927" for this suite.
STEP: Destroying namespace "webhook-6927-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":335,"completed":261,"skipped":5361,"failed":0}
SSS
------------------------------
[sig-apps] DisruptionController
  should update/patch PodDisruptionBudget status [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] DisruptionController
... skipping 17 lines ...
STEP: Patching PodDisruptionBudget status
STEP: Waiting for the pdb to be processed
[AfterEach] [sig-apps] DisruptionController
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:17:04.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-7506" for this suite.
•{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":335,"completed":262,"skipped":5364,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Subpath
  Atomic writer volumes
  should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Subpath
... skipping 7 lines ...
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod pod-subpath-test-downwardapi-7rnm
STEP: Creating a pod to test atomic-volume-subpath
May 14 13:17:04.307: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-7rnm" in namespace "subpath-8696" to be "Succeeded or Failed"
May 14 13:17:04.336: INFO: Pod "pod-subpath-test-downwardapi-7rnm": Phase="Pending", Reason="", readiness=false. Elapsed: 28.719972ms
May 14 13:17:06.353: INFO: Pod "pod-subpath-test-downwardapi-7rnm": Phase="Running", Reason="", readiness=true. Elapsed: 2.045617521s
May 14 13:17:08.369: INFO: Pod "pod-subpath-test-downwardapi-7rnm": Phase="Running", Reason="", readiness=true.
Elapsed: 4.061271726s
May 14 13:17:10.387: INFO: Pod "pod-subpath-test-downwardapi-7rnm": Phase="Running", Reason="", readiness=true. Elapsed: 6.079301211s
May 14 13:17:12.404: INFO: Pod "pod-subpath-test-downwardapi-7rnm": Phase="Running", Reason="", readiness=true. Elapsed: 8.096575801s
May 14 13:17:14.424: INFO: Pod "pod-subpath-test-downwardapi-7rnm": Phase="Running", Reason="", readiness=true. Elapsed: 10.116897797s
May 14 13:17:16.441: INFO: Pod "pod-subpath-test-downwardapi-7rnm": Phase="Running", Reason="", readiness=true. Elapsed: 12.134163412s
May 14 13:17:18.458: INFO: Pod "pod-subpath-test-downwardapi-7rnm": Phase="Running", Reason="", readiness=true. Elapsed: 14.150819875s
May 14 13:17:20.476: INFO: Pod "pod-subpath-test-downwardapi-7rnm": Phase="Running", Reason="", readiness=true. Elapsed: 16.168274542s
May 14 13:17:22.492: INFO: Pod "pod-subpath-test-downwardapi-7rnm": Phase="Running", Reason="", readiness=true. Elapsed: 18.185215196s
May 14 13:17:24.509: INFO: Pod "pod-subpath-test-downwardapi-7rnm": Phase="Running", Reason="", readiness=true. Elapsed: 20.201294319s
May 14 13:17:26.524: INFO: Pod "pod-subpath-test-downwardapi-7rnm": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 22.217186203s
STEP: Saw pod success
May 14 13:17:26.524: INFO: Pod "pod-subpath-test-downwardapi-7rnm" satisfied condition "Succeeded or Failed"
May 14 13:17:26.540: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod pod-subpath-test-downwardapi-7rnm container test-container-subpath-downwardapi-7rnm: <nil>
STEP: delete the pod
May 14 13:17:26.604: INFO: Waiting for pod pod-subpath-test-downwardapi-7rnm to disappear
May 14 13:17:26.621: INFO: Pod pod-subpath-test-downwardapi-7rnm no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-7rnm
May 14 13:17:26.621: INFO: Deleting pod "pod-subpath-test-downwardapi-7rnm" in namespace "subpath-8696"
[AfterEach] [sig-storage] Subpath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:17:26.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8696" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","total":335,"completed":263,"skipped":5372,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] ConfigMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-volume-882745f0-b864-45f5-bbd6-52fb4c40bd1c
STEP: Creating a pod to test consume configMaps
May 14 13:17:26.826: INFO: Waiting up to 5m0s for pod "pod-configmaps-c36464c5-7d63-45ba-8600-2e6e66d5b0a4" in namespace "configmap-5341" to be "Succeeded or Failed"
May 14 13:17:26.846: INFO: Pod "pod-configmaps-c36464c5-7d63-45ba-8600-2e6e66d5b0a4": Phase="Pending", Reason="", readiness=false. Elapsed: 19.892735ms
May 14 13:17:28.862: INFO: Pod "pod-configmaps-c36464c5-7d63-45ba-8600-2e6e66d5b0a4": Phase="Running", Reason="", readiness=true. Elapsed: 2.036259755s
May 14 13:17:30.879: INFO: Pod "pod-configmaps-c36464c5-7d63-45ba-8600-2e6e66d5b0a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052488869s
STEP: Saw pod success
May 14 13:17:30.879: INFO: Pod "pod-configmaps-c36464c5-7d63-45ba-8600-2e6e66d5b0a4" satisfied condition "Succeeded or Failed"
May 14 13:17:30.894: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod pod-configmaps-c36464c5-7d63-45ba-8600-2e6e66d5b0a4 container agnhost-container: <nil>
STEP: delete the pod
May 14 13:17:30.947: INFO: Waiting for pod pod-configmaps-c36464c5-7d63-45ba-8600-2e6e66d5b0a4 to disappear
May 14 13:17:30.961: INFO: Pod pod-configmaps-c36464c5-7d63-45ba-8600-2e6e66d5b0a4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:17:30.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5341" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":335,"completed":264,"skipped":5397,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] Services
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:17:41.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6796" for this suite.
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":335,"completed":265,"skipped":5409,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts
  should mount projected service account token [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 3 lines ...
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test service account token:
May 14 13:17:41.507: INFO: Waiting up to 5m0s for pod "test-pod-d1b277f4-6549-4b34-8395-f827e86ba57b" in namespace "svcaccounts-4878" to be "Succeeded or Failed"
May 14 13:17:41.543: INFO: Pod "test-pod-d1b277f4-6549-4b34-8395-f827e86ba57b": Phase="Pending", Reason="", readiness=false. Elapsed: 36.307526ms
May 14 13:17:43.561: INFO: Pod "test-pod-d1b277f4-6549-4b34-8395-f827e86ba57b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.053590865s
STEP: Saw pod success
May 14 13:17:43.561: INFO: Pod "test-pod-d1b277f4-6549-4b34-8395-f827e86ba57b" satisfied condition "Succeeded or Failed"
May 14 13:17:43.576: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod test-pod-d1b277f4-6549-4b34-8395-f827e86ba57b container agnhost-container: <nil>
STEP: delete the pod
May 14 13:17:43.646: INFO: Waiting for pod test-pod-d1b277f4-6549-4b34-8395-f827e86ba57b to disappear
May 14 13:17:43.662: INFO: Pod test-pod-d1b277f4-6549-4b34-8395-f827e86ba57b no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:17:43.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4878" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":335,"completed":266,"skipped":5445,"failed":0}
S
------------------------------
[sig-storage] Secrets
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Secrets
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name secret-test-c8be7244-618d-4a96-ba52-ee0dc0ab80af
STEP: Creating a pod to test consume secrets
May 14 13:17:43.922: INFO: Waiting up to 5m0s for pod "pod-secrets-a06c43ad-936e-49bd-ae7d-7a6fdab67f2c" in namespace "secrets-7888" to be "Succeeded or Failed"
May 14 13:17:43.942: INFO: Pod "pod-secrets-a06c43ad-936e-49bd-ae7d-7a6fdab67f2c": Phase="Pending", Reason="", readiness=false. Elapsed: 19.918792ms
May 14 13:17:45.958: INFO: Pod "pod-secrets-a06c43ad-936e-49bd-ae7d-7a6fdab67f2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036014766s
May 14 13:17:47.975: INFO: Pod "pod-secrets-a06c43ad-936e-49bd-ae7d-7a6fdab67f2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052388383s
STEP: Saw pod success
May 14 13:17:47.975: INFO: Pod "pod-secrets-a06c43ad-936e-49bd-ae7d-7a6fdab67f2c" satisfied condition "Succeeded or Failed"
May 14 13:17:47.990: INFO: Trying to get logs from node capz-05t52q-md-0-scnhc pod pod-secrets-a06c43ad-936e-49bd-ae7d-7a6fdab67f2c container secret-volume-test: <nil>
STEP: delete the pod
May 14 13:17:48.046: INFO: Waiting for pod pod-secrets-a06c43ad-936e-49bd-ae7d-7a6fdab67f2c to disappear
May 14 13:17:48.068: INFO: Pod pod-secrets-a06c43ad-936e-49bd-ae7d-7a6fdab67f2c no longer exists
[AfterEach] [sig-storage] Secrets
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:17:48.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7888" for this suite.
STEP: Destroying namespace "secret-namespace-4827" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":335,"completed":267,"skipped":5446,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Pods
... skipping 14 lines ...
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 14 13:17:50.913: INFO: Successfully updated pod "pod-update-activedeadlineseconds-6e752b63-04c3-4b90-93eb-056564a92aad"
May 14 13:17:50.913: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-6e752b63-04c3-4b90-93eb-056564a92aad" in namespace "pods-8462" to be "terminated due to deadline exceeded"
May 14 13:17:50.928: INFO: Pod "pod-update-activedeadlineseconds-6e752b63-04c3-4b90-93eb-056564a92aad": Phase="Running", Reason="", readiness=true. Elapsed: 15.100466ms
May 14 13:17:52.944: INFO: Pod "pod-update-activedeadlineseconds-6e752b63-04c3-4b90-93eb-056564a92aad": Phase="Running", Reason="", readiness=true. Elapsed: 2.03113617s
May 14 13:17:54.961: INFO: Pod "pod-update-activedeadlineseconds-6e752b63-04c3-4b90-93eb-056564a92aad": Phase="Failed", Reason="DeadlineExceeded", readiness=true. Elapsed: 4.047455986s
May 14 13:17:54.961: INFO: Pod "pod-update-activedeadlineseconds-6e752b63-04c3-4b90-93eb-056564a92aad" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [sig-node] Pods
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:17:54.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8462" for this suite.
•{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":335,"completed":268,"skipped":5487,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods
  should be updated [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Pods
... skipping 18 lines ...
STEP: verifying the updated pod is in kubernetes
May 14 13:17:59.770: INFO: Pod update OK
[AfterEach] [sig-node] Pods
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:17:59.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8010" for this suite.
•{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":335,"completed":269,"skipped":5502,"failed":0}
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Downward API volume
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
May 14 13:17:59.943: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a0aa0949-d702-4f4d-92e7-67869e03342f" in namespace "downward-api-6973" to be "Succeeded or Failed"
May 14 13:17:59.966: INFO: Pod "downwardapi-volume-a0aa0949-d702-4f4d-92e7-67869e03342f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.599822ms
May 14 13:18:01.983: INFO: Pod "downwardapi-volume-a0aa0949-d702-4f4d-92e7-67869e03342f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.0401239s
STEP: Saw pod success
May 14 13:18:01.983: INFO: Pod "downwardapi-volume-a0aa0949-d702-4f4d-92e7-67869e03342f" satisfied condition "Succeeded or Failed"
May 14 13:18:01.999: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod downwardapi-volume-a0aa0949-d702-4f4d-92e7-67869e03342f container client-container: <nil>
STEP: delete the pod
May 14 13:18:02.073: INFO: Waiting for pod downwardapi-volume-a0aa0949-d702-4f4d-92e7-67869e03342f to disappear
May 14 13:18:02.089: INFO: Pod downwardapi-volume-a0aa0949-d702-4f4d-92e7-67869e03342f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:18:02.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6973" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":335,"completed":270,"skipped":5502,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods
  should delete a collection of pods [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Pods
... skipping 20 lines ...
May 14 13:18:09.556: INFO: Pod quantity 3 is different from expected quantity 0
May 14 13:18:10.555: INFO: Pod quantity 2 is different from expected quantity 0
[AfterEach] [sig-node] Pods
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:18:11.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3676" for this suite.
•{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":335,"completed":271,"skipped":5530,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-node] Sysctls [LinuxOnly] [NodeConformance]
  should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
... skipping 11 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod with one valid and two invalid sysctls
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:18:11.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-6024" for this suite.
•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":335,"completed":272,"skipped":5542,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected secret
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating projection with secret that has name projected-secret-test-7b794e02-6f27-4bd5-879f-a80fb9ad15a7
STEP: Creating a pod to test consume secrets
May 14 13:18:11.915: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-260ac347-203c-41d6-a92e-837a42628d4e" in namespace "projected-8030" to be "Succeeded or Failed"
May 14 13:18:11.934: INFO: Pod "pod-projected-secrets-260ac347-203c-41d6-a92e-837a42628d4e": Phase="Pending", Reason="", readiness=false. Elapsed: 18.78739ms
May 14 13:18:13.951: INFO: Pod "pod-projected-secrets-260ac347-203c-41d6-a92e-837a42628d4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.035966852s
STEP: Saw pod success
May 14 13:18:13.951: INFO: Pod "pod-projected-secrets-260ac347-203c-41d6-a92e-837a42628d4e" satisfied condition "Succeeded or Failed"
May 14 13:18:13.966: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod pod-projected-secrets-260ac347-203c-41d6-a92e-837a42628d4e container projected-secret-volume-test: <nil>
STEP: delete the pod
May 14 13:18:14.038: INFO: Waiting for pod pod-projected-secrets-260ac347-203c-41d6-a92e-837a42628d4e to disappear
May 14 13:18:14.052: INFO: Pod pod-projected-secrets-260ac347-203c-41d6-a92e-837a42628d4e no longer exists
[AfterEach] [sig-storage] Projected secret
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:18:14.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8030" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":273,"skipped":5550,"failed":0}
S
------------------------------
[sig-apps] StatefulSet
  Basic StatefulSet functionality [StatefulSetBasic]
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] StatefulSet
... skipping 46 lines ...
May 14 13:19:45.327: INFO: Waiting for statefulset status.replicas updated to 0
May 14 13:19:45.343: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:19:45.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9866" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":335,"completed":274,"skipped":5551,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  patching/updating a mutating webhook should work [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:19:49.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4622" for this suite.
STEP: Destroying namespace "webhook-4622-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":335,"completed":275,"skipped":5558,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 89 lines ...
May 14 13:20:08.818: INFO: Deleting pod "simpletest-rc-to-be-deleted-hkgkj" in namespace "gc-1632"
May 14 13:20:08.857: INFO: Deleting pod "simpletest-rc-to-be-deleted-jcgbt" in namespace "gc-1632"
[AfterEach] [sig-api-machinery] Garbage collector
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:20:08.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1632" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":335,"completed":276,"skipped":5583,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete pods created by rc when not orphaning [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:20:19.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-214" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":335,"completed":277,"skipped":5587,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  listing mutating webhooks should work [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 28 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:20:34.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7022" for this suite.
STEP: Destroying namespace "webhook-7022-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":335,"completed":278,"skipped":5639,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should deny crd creation [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:20:38.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5193" for this suite.
STEP: Destroying namespace "webhook-5193-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":335,"completed":279,"skipped":5713,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should include webhook resources in discovery documents [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 26 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:20:42.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3627" for this suite.
STEP: Destroying namespace "webhook-3627-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":335,"completed":280,"skipped":5742,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods
  should contain environment variables for services [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Pods
... skipping 7 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
May 14 13:20:42.907: INFO: The status of Pod server-envvars-c52ca256-00d0-4b28-ae75-628c829a0755 is Pending, waiting for it to be Running (with Ready = true)
May 14 13:20:44.925: INFO: The status of Pod server-envvars-c52ca256-00d0-4b28-ae75-628c829a0755 is Pending, waiting for it to be Running (with Ready = true)
May 14 13:20:46.924: INFO: The status of Pod server-envvars-c52ca256-00d0-4b28-ae75-628c829a0755 is Running (Ready = true)
May 14 13:20:47.003: INFO: Waiting up to 5m0s for pod "client-envvars-6e866109-813c-4e56-bbba-de41cbfe6ba1" in namespace "pods-3774" to be "Succeeded or Failed"
May 14 13:20:47.020: INFO: Pod "client-envvars-6e866109-813c-4e56-bbba-de41cbfe6ba1": Phase="Pending", Reason="", readiness=false. Elapsed: 16.205688ms
May 14 13:20:49.036: INFO: Pod "client-envvars-6e866109-813c-4e56-bbba-de41cbfe6ba1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.03301602s
STEP: Saw pod success
May 14 13:20:49.036: INFO: Pod "client-envvars-6e866109-813c-4e56-bbba-de41cbfe6ba1" satisfied condition "Succeeded or Failed"
May 14 13:20:49.052: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod client-envvars-6e866109-813c-4e56-bbba-de41cbfe6ba1 container env3cont: <nil>
STEP: delete the pod
May 14 13:20:49.125: INFO: Waiting for pod client-envvars-6e866109-813c-4e56-bbba-de41cbfe6ba1 to disappear
May 14 13:20:49.140: INFO: Pod client-envvars-6e866109-813c-4e56-bbba-de41cbfe6ba1 no longer exists
[AfterEach] [sig-node] Pods
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:20:49.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3774" for this suite.
•{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":335,"completed":281,"skipped":5800,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected secret
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name projected-secret-test-9184c1bd-4965-4d23-ba8f-8459730862a9
STEP: Creating a pod to test consume secrets
May 14 13:20:49.335: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fb3b9a4b-ced2-4807-a959-9755bbe7c1ee" in namespace "projected-8413" to be "Succeeded or Failed"
May 14 13:20:49.353: INFO: Pod "pod-projected-secrets-fb3b9a4b-ced2-4807-a959-9755bbe7c1ee": Phase="Pending", Reason="", readiness=false. Elapsed: 18.755392ms
May 14 13:20:51.370: INFO: Pod "pod-projected-secrets-fb3b9a4b-ced2-4807-a959-9755bbe7c1ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.035285061s
STEP: Saw pod success
May 14 13:20:51.370: INFO: Pod "pod-projected-secrets-fb3b9a4b-ced2-4807-a959-9755bbe7c1ee" satisfied condition "Succeeded or Failed"
May 14 13:20:51.385: INFO: Trying to get logs from node capz-05t52q-md-0-dxhn8 pod pod-projected-secrets-fb3b9a4b-ced2-4807-a959-9755bbe7c1ee container secret-volume-test: <nil>
STEP: delete the pod
May 14 13:20:51.470: INFO: Waiting for pod pod-projected-secrets-fb3b9a4b-ced2-4807-a959-9755bbe7c1ee to disappear
May 14 13:20:51.486: INFO: Pod pod-projected-secrets-fb3b9a4b-ced2-4807-a959-9755bbe7c1ee no longer exists
[AfterEach] [sig-storage] Projected secret
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 14 13:20:51.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8413" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":335,"completed":282,"skipped":5851,"failed":0}
SSSSSSS
------------------------------
[sig-node] Probing container
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Probing container
... skipping 8 lines ...
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod test-webserver-e7db543c-3844-44d3-8ce6-183f9658687f in namespace container-probe-6202
May 14 13:20:53.700: INFO: Started pod test-webserver-e7db543c-3844-44d3-8ce6-183f9658687f in namespace container-probe-6202
STEP: checking the pod's current state and verifying that restartCount is present
May 14 13:20:53.715: INFO: Initial restart count of pod test-webserver-e7db543c-3844-44d3-8ce6-183f9658687f is 0
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:169","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2022-05-14T13:21:38Z"}
++ early_exit_handler
++ '[' -n 160 ']'
++ kill -TERM 160
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 4 lines ...