PR: jwtty: test: Add e2e tests for private link service integration
Result: ABORTED
Tests: 0 failed / 0 succeeded
Started: 2022-05-11 08:16
Elapsed: 1h1m
Revision: 355afe5b7515850c34b63d6a84f834bab2e052db
Refs: 1692

No Test Failures!


Error lines from build-log.txt

... skipping 69 lines ...
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
/home/prow/go/src/sigs.k8s.io/cloud-provider-azure /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure
Image Tag is 6a1a3bc
Error response from daemon: manifest for capzci.azurecr.io/azure-cloud-controller-manager:6a1a3bc not found: manifest unknown: manifest tagged by "6a1a3bc" is not found
Build Linux Azure amd64 cloud controller manager
make: Entering directory '/home/prow/go/src/sigs.k8s.io/cloud-provider-azure'
make ARCH=amd64 build-ccm-image
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cloud-provider-azure'
docker buildx inspect img-builder > /dev/null || docker buildx create --name img-builder --use
error: no builder "img-builder" found
img-builder
# enable qemu for arm64 build
# https://github.com/docker/buildx/issues/464#issuecomment-741507760
docker run --privileged --rm tonistiigi/binfmt --uninstall qemu-aarch64
Unable to find image 'tonistiigi/binfmt:latest' locally
latest: Pulling from tonistiigi/binfmt
... skipping 1271 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! kubectl get secrets | grep capz-ips2qf-kubeconfig; do sleep 1; done"
capz-ips2qf-kubeconfig                 cluster.x-k8s.io/secret               1      0s
# Get kubeconfig and store it locally.
kubectl get secrets capz-ips2qf-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! kubectl --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-ips2qf-control-plane-b7lgq   NotReady   control-plane,master   7s    v1.23.5
run "kubectl --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Waiting for 3 control plane machine(s), 2 worker machine(s), and  windows machine(s) to become Ready
node/capz-ips2qf-control-plane-2f8k9 condition met
node/capz-ips2qf-control-plane-b7lgq condition met
... skipping 48 lines ...
+++ [0511 08:42:05] Building go targets for linux/amd64:
    vendor/github.com/onsi/ginkgo/ginkgo
> non-static build: k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo
make[1]: Leaving directory '/home/prow/go/src/k8s.io/kubernetes'
Conformance test: not doing test setup.
I0511 08:42:08.700806   90980 e2e.go:132] Starting e2e run "426f3155-7430-4b28-941b-d416db3ee2fa" on Ginkgo node 1
{"msg":"Test Suite starting","total":335,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1652258528 - Will randomize all specs
Will run 335 of 7044 specs

May 11 08:42:11.147: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
... skipping 42 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] IngressClass API
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:42:12.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-8299" for this suite.
•{"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":335,"completed":1,"skipped":2,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Pods
... skipping 12 lines ...
May 11 08:42:14.994: INFO: The status of Pod pod-hostip-8364af73-44ec-4beb-ab11-eb33b98951c0 is Running (Ready = true)
May 11 08:42:15.063: INFO: Pod pod-hostip-8364af73-44ec-4beb-ab11-eb33b98951c0 has hostIP: 10.1.0.5
[AfterEach] [sig-node] Pods
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:42:15.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2222" for this suite.
•{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":335,"completed":2,"skipped":19,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] ConfigMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap configmap-3574/configmap-test-657a7584-3ee8-4c2b-9d76-e21e5e7a5035
STEP: Creating a pod to test consume configMaps
May 11 08:42:15.465: INFO: Waiting up to 5m0s for pod "pod-configmaps-6f752f63-8d7c-4b65-882d-3a657456660c" in namespace "configmap-3574" to be "Succeeded or Failed"
May 11 08:42:15.499: INFO: Pod "pod-configmaps-6f752f63-8d7c-4b65-882d-3a657456660c": Phase="Pending", Reason="", readiness=false. Elapsed: 33.739342ms
May 11 08:42:17.534: INFO: Pod "pod-configmaps-6f752f63-8d7c-4b65-882d-3a657456660c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069105515s
May 11 08:42:19.570: INFO: Pod "pod-configmaps-6f752f63-8d7c-4b65-882d-3a657456660c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105175015s
STEP: Saw pod success
May 11 08:42:19.570: INFO: Pod "pod-configmaps-6f752f63-8d7c-4b65-882d-3a657456660c" satisfied condition "Succeeded or Failed"
May 11 08:42:19.605: INFO: Trying to get logs from node capz-ips2qf-md-0-n6nwl pod pod-configmaps-6f752f63-8d7c-4b65-882d-3a657456660c container env-test: <nil>
STEP: delete the pod
May 11 08:42:19.699: INFO: Waiting for pod pod-configmaps-6f752f63-8d7c-4b65-882d-3a657456660c to disappear
May 11 08:42:19.732: INFO: Pod pod-configmaps-6f752f63-8d7c-4b65-882d-3a657456660c no longer exists
[AfterEach] [sig-node] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:42:19.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3574" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":335,"completed":3,"skipped":43,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] Services
... skipping 10 lines ...
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:42:20.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3086" for this suite.
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":335,"completed":4,"skipped":53,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 11 08:42:20.190: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating the pod
May 11 08:42:20.430: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:42:23.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3970" for this suite.
•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":335,"completed":5,"skipped":76,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] ConfigMap
... skipping 18 lines ...
STEP: Creating configMap with name cm-test-opt-create-ddd9accb-38cf-4e5c-975a-fb024f07b73d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:43:37.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9957" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":335,"completed":6,"skipped":132,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] Networking
... skipping 39 lines ...
May 11 08:44:01.027: INFO: reached 192.168.167.196 after 0/1 tries
May 11 08:44:01.027: INFO: Going to retry 0 out of 2 pods....
[AfterEach] [sig-network] Networking
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:44:01.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-51" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":335,"completed":7,"skipped":176,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0666 on node default medium
May 11 08:44:01.417: INFO: Waiting up to 5m0s for pod "pod-8a783fa5-0789-434a-a065-b4287909e3cd" in namespace "emptydir-6060" to be "Succeeded or Failed"
May 11 08:44:01.451: INFO: Pod "pod-8a783fa5-0789-434a-a065-b4287909e3cd": Phase="Pending", Reason="", readiness=false. Elapsed: 34.081242ms
May 11 08:44:03.487: INFO: Pod "pod-8a783fa5-0789-434a-a065-b4287909e3cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.069729511s
STEP: Saw pod success
May 11 08:44:03.487: INFO: Pod "pod-8a783fa5-0789-434a-a065-b4287909e3cd" satisfied condition "Succeeded or Failed"
May 11 08:44:03.523: INFO: Trying to get logs from node capz-ips2qf-md-0-n6nwl pod pod-8a783fa5-0789-434a-a065-b4287909e3cd container test-container: <nil>
STEP: delete the pod
May 11 08:44:03.617: INFO: Waiting for pod pod-8a783fa5-0789-434a-a065-b4287909e3cd to disappear
May 11 08:44:03.650: INFO: Pod pod-8a783fa5-0789-434a-a065-b4287909e3cd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:44:03.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6060" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":8,"skipped":191,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] Watchers
... skipping 14 lines ...
May 11 08:44:04.224: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-2699  b59da49f-d538-4e2a-bccb-36e830182678 2962 0 2022-05-11 08:44:03 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2022-05-11 08:44:04 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 11 08:44:04.224: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-2699  b59da49f-d538-4e2a-bccb-36e830182678 2966 0 2022-05-11 08:44:03 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2022-05-11 08:44:04 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:44:04.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2699" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":335,"completed":9,"skipped":215,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
May 11 08:44:04.601: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9281c2ef-91c1-4fdd-9474-6a45a17842ca" in namespace "projected-739" to be "Succeeded or Failed"
May 11 08:44:04.636: INFO: Pod "downwardapi-volume-9281c2ef-91c1-4fdd-9474-6a45a17842ca": Phase="Pending", Reason="", readiness=false. Elapsed: 34.935484ms
May 11 08:44:06.691: INFO: Pod "downwardapi-volume-9281c2ef-91c1-4fdd-9474-6a45a17842ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.089764476s
STEP: Saw pod success
May 11 08:44:06.691: INFO: Pod "downwardapi-volume-9281c2ef-91c1-4fdd-9474-6a45a17842ca" satisfied condition "Succeeded or Failed"
May 11 08:44:06.725: INFO: Trying to get logs from node capz-ips2qf-md-0-n6nwl pod downwardapi-volume-9281c2ef-91c1-4fdd-9474-6a45a17842ca container client-container: <nil>
STEP: delete the pod
May 11 08:44:06.818: INFO: Waiting for pod downwardapi-volume-9281c2ef-91c1-4fdd-9474-6a45a17842ca to disappear
May 11 08:44:06.851: INFO: Pod downwardapi-volume-9281c2ef-91c1-4fdd-9474-6a45a17842ca no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:44:06.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-739" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":335,"completed":10,"skipped":228,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] Services
... skipping 29 lines ...
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:44:16.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8880" for this suite.
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":335,"completed":11,"skipped":238,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] ReplicationController
... skipping 18 lines ...
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:44:28.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3161" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":335,"completed":12,"skipped":254,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 24 lines ...
May 11 08:44:36.150: INFO: stderr: ""
May 11 08:44:36.150: INFO: stdout: "KIND:     e2e-test-crd-publish-openapi-5877-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<>\n     Specification of Waldo\n\n   status\t<Object>\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:44:41.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8139" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":335,"completed":13,"skipped":255,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] Services
... skipping 46 lines ...
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:44:55.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7897" for this suite.
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":335,"completed":14,"skipped":390,"failed":0}
SSSSSSS
------------------------------
[sig-apps] CronJob 
  should support CronJob API operations [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] CronJob
... skipping 24 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-apps] CronJob
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:44:56.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-4328" for this suite.
•{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":335,"completed":15,"skipped":397,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected configMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-d55c57b1-6d75-48fa-955f-b7b669a41398
STEP: Creating a pod to test consume configMaps
May 11 08:44:57.286: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6e7f2ccc-3165-415c-a7a5-0eaa35552c14" in namespace "projected-4118" to be "Succeeded or Failed"
May 11 08:44:57.325: INFO: Pod "pod-projected-configmaps-6e7f2ccc-3165-415c-a7a5-0eaa35552c14": Phase="Pending", Reason="", readiness=false. Elapsed: 38.701892ms
May 11 08:44:59.364: INFO: Pod "pod-projected-configmaps-6e7f2ccc-3165-415c-a7a5-0eaa35552c14": Phase="Running", Reason="", readiness=true. Elapsed: 2.077835215s
May 11 08:45:01.404: INFO: Pod "pod-projected-configmaps-6e7f2ccc-3165-415c-a7a5-0eaa35552c14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.118096815s
STEP: Saw pod success
May 11 08:45:01.404: INFO: Pod "pod-projected-configmaps-6e7f2ccc-3165-415c-a7a5-0eaa35552c14" satisfied condition "Succeeded or Failed"
May 11 08:45:01.446: INFO: Trying to get logs from node capz-ips2qf-md-0-n6nwl pod pod-projected-configmaps-6e7f2ccc-3165-415c-a7a5-0eaa35552c14 container agnhost-container: <nil>
STEP: delete the pod
May 11 08:45:01.552: INFO: Waiting for pod pod-projected-configmaps-6e7f2ccc-3165-415c-a7a5-0eaa35552c14 to disappear
May 11 08:45:01.590: INFO: Pod pod-projected-configmaps-6e7f2ccc-3165-415c-a7a5-0eaa35552c14 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:45:01.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4118" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":335,"completed":16,"skipped":450,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should list and delete a collection of ReplicaSets [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] ReplicaSet
... skipping 14 lines ...
STEP: DeleteCollection of the ReplicaSets
STEP: After DeleteCollection verify that ReplicaSets have been deleted
[AfterEach] [sig-apps] ReplicaSet
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:45:14.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4598" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":335,"completed":17,"skipped":474,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 11 08:45:14.738: INFO: Waiting up to 5m0s for pod "pod-71d7a9a7-ca15-4d5f-99f4-889a4073dff7" in namespace "emptydir-3060" to be "Succeeded or Failed"
May 11 08:45:14.773: INFO: Pod "pod-71d7a9a7-ca15-4d5f-99f4-889a4073dff7": Phase="Pending", Reason="", readiness=false. Elapsed: 35.941996ms
May 11 08:45:16.813: INFO: Pod "pod-71d7a9a7-ca15-4d5f-99f4-889a4073dff7": Phase="Running", Reason="", readiness=true. Elapsed: 2.075266251s
May 11 08:45:18.852: INFO: Pod "pod-71d7a9a7-ca15-4d5f-99f4-889a4073dff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.114284955s
STEP: Saw pod success
May 11 08:45:18.852: INFO: Pod "pod-71d7a9a7-ca15-4d5f-99f4-889a4073dff7" satisfied condition "Succeeded or Failed"
May 11 08:45:18.890: INFO: Trying to get logs from node capz-ips2qf-md-0-4h2wc pod pod-71d7a9a7-ca15-4d5f-99f4-889a4073dff7 container test-container: <nil>
STEP: delete the pod
May 11 08:45:19.009: INFO: Waiting for pod pod-71d7a9a7-ca15-4d5f-99f4-889a4073dff7 to disappear
May 11 08:45:19.045: INFO: Pod pod-71d7a9a7-ca15-4d5f-99f4-889a4073dff7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:45:19.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3060" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":18,"skipped":508,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90
[BeforeEach] [sig-storage] Projected secret
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90
STEP: Creating projection with secret that has name projected-secret-test-586ad83b-ab98-447c-8567-be4cc3a615c2
STEP: Creating a pod to test consume secrets
May 11 08:45:19.627: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d1bcdd14-ffe0-4376-ad9e-f967297bd9fa" in namespace "projected-3609" to be "Succeeded or Failed"
May 11 08:45:19.666: INFO: Pod "pod-projected-secrets-d1bcdd14-ffe0-4376-ad9e-f967297bd9fa": Phase="Pending", Reason="", readiness=false. Elapsed: 38.744845ms
May 11 08:45:21.704: INFO: Pod "pod-projected-secrets-d1bcdd14-ffe0-4376-ad9e-f967297bd9fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.077107989s
STEP: Saw pod success
May 11 08:45:21.704: INFO: Pod "pod-projected-secrets-d1bcdd14-ffe0-4376-ad9e-f967297bd9fa" satisfied condition "Succeeded or Failed"
May 11 08:45:21.744: INFO: Trying to get logs from node capz-ips2qf-md-0-n6nwl pod pod-projected-secrets-d1bcdd14-ffe0-4376-ad9e-f967297bd9fa container projected-secret-volume-test: <nil>
STEP: delete the pod
May 11 08:45:21.837: INFO: Waiting for pod pod-projected-secrets-d1bcdd14-ffe0-4376-ad9e-f967297bd9fa to disappear
May 11 08:45:21.873: INFO: Pod pod-projected-secrets-d1bcdd14-ffe0-4376-ad9e-f967297bd9fa no longer exists
[AfterEach] [sig-storage] Projected secret
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:45:21.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3609" for this suite.
STEP: Destroying namespace "secret-namespace-9897" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":335,"completed":19,"skipped":509,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-node] PrivilegedPod [NodeConformance] 
  should enable privileged commands [LinuxOnly]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49
[BeforeEach] [sig-node] PrivilegedPod [NodeConformance]
... skipping 24 lines ...
May 11 08:45:27.145: INFO: ExecWithOptions: Clientset creation
May 11 08:45:27.145: INFO: ExecWithOptions: execute(POST https://capz-ips2qf-6d06e991.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/e2e-privileged-pod-614/pods/privileged-pod/exec?command=ip&command=link&command=add&command=dummy1&command=type&command=dummy&container=not-privileged-container&container=not-privileged-container&stderr=true&stdout=true %!s(MISSING))
[AfterEach] [sig-node] PrivilegedPod [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:45:27.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-privileged-pod-614" for this suite.
•{"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":335,"completed":20,"skipped":521,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 11 08:45:27.596: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename webhook
... skipping 6 lines ...
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 11 08:45:28.400: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.May, 11, 8, 45, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.May, 11, 8, 45, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.May, 11, 8, 45, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.May, 11, 8, 45, 28, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 08:45:31.491: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:45:31.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7052" for this suite.
STEP: Destroying namespace "webhook-7052-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":335,"completed":21,"skipped":539,"failed":0}
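[Editor's note] The test above registers a webhook the API server cannot reach, with `failurePolicy: Fail` (fail closed), so the configmap create is unconditionally rejected. The decision rule being exercised can be modeled schematically (the `admit` function is an illustrative simplification, not the apiserver's code):

```go
package main

import "fmt"

// admit models how failurePolicy applies when the API server cannot reach
// an admission webhook: "Fail" (fail closed) rejects the request on any
// call failure, "Ignore" (fail open) lets it through; when the webhook is
// reachable, its own verdict decides.
func admit(webhookReachable, webhookAllows bool, failurePolicy string) bool {
	if !webhookReachable {
		return failurePolicy == "Ignore"
	}
	return webhookAllows
}

func main() {
	fmt.Println(admit(false, true, "Fail"))   // unreachable + fail closed: rejected
	fmt.Println(admit(false, true, "Ignore")) // unreachable + fail open: allowed
}
```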
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir volume type on tmpfs
May 11 08:45:32.457: INFO: Waiting up to 5m0s for pod "pod-9d9713bc-f614-4c81-b0ec-34769f43054d" in namespace "emptydir-2266" to be "Succeeded or Failed"
May 11 08:45:32.497: INFO: Pod "pod-9d9713bc-f614-4c81-b0ec-34769f43054d": Phase="Pending", Reason="", readiness=false. Elapsed: 39.797803ms
May 11 08:45:34.535: INFO: Pod "pod-9d9713bc-f614-4c81-b0ec-34769f43054d": Phase="Running", Reason="", readiness=true. Elapsed: 2.078673873s
May 11 08:45:36.574: INFO: Pod "pod-9d9713bc-f614-4c81-b0ec-34769f43054d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.117250688s
STEP: Saw pod success
May 11 08:45:36.574: INFO: Pod "pod-9d9713bc-f614-4c81-b0ec-34769f43054d" satisfied condition "Succeeded or Failed"
May 11 08:45:36.612: INFO: Trying to get logs from node capz-ips2qf-md-0-n6nwl pod pod-9d9713bc-f614-4c81-b0ec-34769f43054d container test-container: <nil>
STEP: delete the pod
May 11 08:45:36.706: INFO: Waiting for pod pod-9d9713bc-f614-4c81-b0ec-34769f43054d to disappear
May 11 08:45:36.743: INFO: Pod pod-9d9713bc-f614-4c81-b0ec-34769f43054d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:45:36.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2266" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":22,"skipped":561,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should delete a collection of services [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] Services
... skipping 16 lines ...
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:45:37.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1734" for this suite.
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should delete a collection of services [Conformance]","total":335,"completed":23,"skipped":582,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
May 11 08:45:37.817: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl kubectl --server=https://capz-ips2qf-6d06e991.eastus2.cloudapp.azure.com:6443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig --namespace=kubectl-979 proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:45:38.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-979" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":335,"completed":24,"skipped":605,"failed":0}
SS
------------------------------
[sig-node] Pods 
  should run through the lifecycle of Pods and PodStatus [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Pods
... skipping 34 lines ...
May 11 08:45:42.715: INFO: observed event type MODIFIED
May 11 08:45:42.728: INFO: observed event type MODIFIED
[AfterEach] [sig-node] Pods
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:45:42.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8691" for this suite.
•{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":335,"completed":25,"skipped":607,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] Job
... skipping 13 lines ...
May 11 08:45:45.396: INFO: Terminating Job.batch foo pods took: 101.192634ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:46:17.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8935" for this suite.
•{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":335,"completed":26,"skipped":637,"failed":0}
SSSSSSSSS
------------------------------
[sig-node] Container Runtime blackbox test on terminated container 
  should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Container Runtime
... skipping 3 lines ...
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 11 08:46:20.559: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:46:20.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5051" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":335,"completed":27,"skipped":646,"failed":0}
SSSSSSSSSS
------------------------------
[sig-node] Pods Extended Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Pods Extended
... skipping 11 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [sig-node] Pods Extended
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:46:21.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3265" for this suite.
•{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":335,"completed":28,"skipped":656,"failed":0}
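[Editor's note] The QOS class verified above is derived from the pod's resource requests and limits: Guaranteed when every container's requests equal its limits for cpu and memory, BestEffort when nothing is set, Burstable otherwise. A single-container simplification of that rule (the real logic, in `k8s.io/kubernetes/pkg/apis/core/v1/helper/qos`, evaluates all containers and defaulted requests):

```go
package main

import "fmt"

type resources struct{ cpuReq, cpuLim, memReq, memLim string }

// qosClass is a simplified, single-container version of the Kubernetes
// QOS classification rule the test above exercises.
func qosClass(r resources) string {
	if r.cpuReq == "" && r.cpuLim == "" && r.memReq == "" && r.memLim == "" {
		return "BestEffort"
	}
	if r.cpuReq != "" && r.memReq != "" && r.cpuReq == r.cpuLim && r.memReq == r.memLim {
		return "Guaranteed"
	}
	return "Burstable"
}

func main() {
	fmt.Println(qosClass(resources{"100m", "100m", "128Mi", "128Mi"})) // Guaranteed
	fmt.Println(qosClass(resources{}))                                 // BestEffort
	fmt.Println(qosClass(resources{cpuReq: "100m"}))                   // Burstable
}
```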
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0777 on node default medium
May 11 08:46:21.492: INFO: Waiting up to 5m0s for pod "pod-af13c7fc-2163-432c-b256-140c3c59d0c9" in namespace "emptydir-7269" to be "Succeeded or Failed"
May 11 08:46:21.529: INFO: Pod "pod-af13c7fc-2163-432c-b256-140c3c59d0c9": Phase="Pending", Reason="", readiness=false. Elapsed: 37.687931ms
May 11 08:46:23.568: INFO: Pod "pod-af13c7fc-2163-432c-b256-140c3c59d0c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.076643918s
STEP: Saw pod success
May 11 08:46:23.568: INFO: Pod "pod-af13c7fc-2163-432c-b256-140c3c59d0c9" satisfied condition "Succeeded or Failed"
May 11 08:46:23.606: INFO: Trying to get logs from node capz-ips2qf-md-0-n6nwl pod pod-af13c7fc-2163-432c-b256-140c3c59d0c9 container test-container: <nil>
STEP: delete the pod
May 11 08:46:23.706: INFO: Waiting for pod pod-af13c7fc-2163-432c-b256-140c3c59d0c9 to disappear
May 11 08:46:23.747: INFO: Pod pod-af13c7fc-2163-432c-b256-140c3c59d0c9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:46:23.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7269" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":29,"skipped":664,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Kubelet
... skipping 10 lines ...
May 11 08:46:24.175: INFO: The status of Pod busybox-scheduling-79c38c81-e694-4d54-9833-0cf105373d7a is Pending, waiting for it to be Running (with Ready = true)
May 11 08:46:26.213: INFO: The status of Pod busybox-scheduling-79c38c81-e694-4d54-9833-0cf105373d7a is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:46:26.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5855" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":335,"completed":30,"skipped":687,"failed":0}
SSS
------------------------------
[sig-auth] ServiceAccounts 
  should run through the lifecycle of a ServiceAccount [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 11 lines ...
STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
STEP: deleting the ServiceAccount
[AfterEach] [sig-auth] ServiceAccounts
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:46:26.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8435" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":335,"completed":31,"skipped":690,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 7 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
May 11 08:46:27.204: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:46:27.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3520" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":335,"completed":32,"skipped":704,"failed":0}
SS
------------------------------
[sig-node] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Docker Containers
... skipping 3 lines ...
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test override all
May 11 08:46:27.880: INFO: Waiting up to 5m0s for pod "client-containers-2d4f7e20-068b-4420-860d-0746d8b8158e" in namespace "containers-6731" to be "Succeeded or Failed"
May 11 08:46:27.916: INFO: Pod "client-containers-2d4f7e20-068b-4420-860d-0746d8b8158e": Phase="Pending", Reason="", readiness=false. Elapsed: 35.903375ms
May 11 08:46:29.954: INFO: Pod "client-containers-2d4f7e20-068b-4420-860d-0746d8b8158e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.073706958s
STEP: Saw pod success
May 11 08:46:29.954: INFO: Pod "client-containers-2d4f7e20-068b-4420-860d-0746d8b8158e" satisfied condition "Succeeded or Failed"
May 11 08:46:29.992: INFO: Trying to get logs from node capz-ips2qf-md-0-n6nwl pod client-containers-2d4f7e20-068b-4420-860d-0746d8b8158e container agnhost-container: <nil>
STEP: delete the pod
May 11 08:46:30.091: INFO: Waiting for pod client-containers-2d4f7e20-068b-4420-860d-0746d8b8158e to disappear
May 11 08:46:30.133: INFO: Pod client-containers-2d4f7e20-068b-4420-860d-0746d8b8158e no longer exists
[AfterEach] [sig-node] Docker Containers
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:46:30.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6731" for this suite.
•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":335,"completed":33,"skipped":706,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Probing container
... skipping 13 lines ...
May 11 08:46:32.645: INFO: Initial restart count of pod liveness-d1e80ffe-01e6-4603-86fa-3687a8bc0808 is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:50:33.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1758" for this suite.
•{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":335,"completed":34,"skipped":733,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  should run the lifecycle of a Deployment [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] Deployment
... skipping 87 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83
May 11 08:50:40.728: INFO: Log out all the ReplicaSets if there is no deployment created
[AfterEach] [sig-apps] Deployment
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:50:40.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-816" for this suite.
•{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":335,"completed":35,"skipped":778,"failed":0}
SS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] StatefulSet
... skipping 13 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-6584
STEP: Waiting until pod test-pod will start running in namespace statefulset-6584
STEP: Creating statefulset with conflicting port in namespace statefulset-6584
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6584
May 11 08:50:43.498: INFO: Observed stateful pod in namespace: statefulset-6584, name: ss-0, uid: 8fd31b93-c859-4df4-90d3-4603eabcbbcc, status phase: Pending. Waiting for statefulset controller to delete.
May 11 08:50:43.528: INFO: Observed stateful pod in namespace: statefulset-6584, name: ss-0, uid: 8fd31b93-c859-4df4-90d3-4603eabcbbcc, status phase: Failed. Waiting for statefulset controller to delete.
May 11 08:50:43.545: INFO: Observed stateful pod in namespace: statefulset-6584, name: ss-0, uid: 8fd31b93-c859-4df4-90d3-4603eabcbbcc, status phase: Failed. Waiting for statefulset controller to delete.
May 11 08:50:43.562: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6584
STEP: Removing pod with conflicting port in namespace statefulset-6584
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6584 and will be in running state
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120
May 11 08:50:47.770: INFO: Deleting all statefulset in ns statefulset-6584
May 11 08:50:47.806: INFO: Scaling statefulset ss to 0
May 11 08:50:57.971: INFO: Waiting for statefulset status.replicas updated to 0
May 11 08:50:58.015: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:50:58.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6584" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":335,"completed":36,"skipped":780,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir volume type on node default medium
May 11 08:50:58.587: INFO: Waiting up to 5m0s for pod "pod-e3895a0f-24c0-4acb-b08d-a780cf7e15c3" in namespace "emptydir-6434" to be "Succeeded or Failed"
May 11 08:50:58.629: INFO: Pod "pod-e3895a0f-24c0-4acb-b08d-a780cf7e15c3": Phase="Pending", Reason="", readiness=false. Elapsed: 41.620119ms
May 11 08:51:00.668: INFO: Pod "pod-e3895a0f-24c0-4acb-b08d-a780cf7e15c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.080486404s
STEP: Saw pod success
May 11 08:51:00.668: INFO: Pod "pod-e3895a0f-24c0-4acb-b08d-a780cf7e15c3" satisfied condition "Succeeded or Failed"
May 11 08:51:00.710: INFO: Trying to get logs from node capz-ips2qf-md-0-n6nwl pod pod-e3895a0f-24c0-4acb-b08d-a780cf7e15c3 container test-container: <nil>
STEP: delete the pod
May 11 08:51:00.821: INFO: Waiting for pod pod-e3895a0f-24c0-4acb-b08d-a780cf7e15c3 to disappear
May 11 08:51:00.857: INFO: Pod pod-e3895a0f-24c0-4acb-b08d-a780cf7e15c3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:51:00.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6434" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":37,"skipped":807,"failed":0}
S
------------------------------
[sig-node] Variable Expansion 
  should allow substituting values in a volume subpath [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Variable Expansion
... skipping 3 lines ...
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow substituting values in a volume subpath [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test substitution in volume subpath
May 11 08:51:01.246: INFO: Waiting up to 5m0s for pod "var-expansion-6ee8dfc1-860d-49c0-bcfa-f92e808e16a8" in namespace "var-expansion-7851" to be "Succeeded or Failed"
May 11 08:51:01.282: INFO: Pod "var-expansion-6ee8dfc1-860d-49c0-bcfa-f92e808e16a8": Phase="Pending", Reason="", readiness=false. Elapsed: 36.01207ms
May 11 08:51:03.320: INFO: Pod "var-expansion-6ee8dfc1-860d-49c0-bcfa-f92e808e16a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.074207982s
STEP: Saw pod success
May 11 08:51:03.320: INFO: Pod "var-expansion-6ee8dfc1-860d-49c0-bcfa-f92e808e16a8" satisfied condition "Succeeded or Failed"
May 11 08:51:03.361: INFO: Trying to get logs from node capz-ips2qf-md-0-4h2wc pod var-expansion-6ee8dfc1-860d-49c0-bcfa-f92e808e16a8 container dapi-container: <nil>
STEP: delete the pod
May 11 08:51:03.484: INFO: Waiting for pod var-expansion-6ee8dfc1-860d-49c0-bcfa-f92e808e16a8 to disappear
May 11 08:51:03.522: INFO: Pod var-expansion-6ee8dfc1-860d-49c0-bcfa-f92e808e16a8 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:51:03.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7851" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":335,"completed":38,"skipped":808,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-cli] Kubectl client
... skipping 23 lines ...
May 11 08:51:08.092: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
May 11 08:51:08.092: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://capz-ips2qf-6d06e991.eastus2.cloudapp.azure.com:6443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig --namespace=kubectl-6190 describe pod agnhost-primary-5pg5j'
May 11 08:51:08.392: INFO: stderr: ""
May 11 08:51:08.392: INFO: stdout: "Name:         agnhost-primary-5pg5j\nNamespace:    kubectl-6190\nPriority:     0\nNode:         capz-ips2qf-md-0-4h2wc/10.1.0.5\nStart Time:   Wed, 11 May 2022 08:51:05 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  cni.projectcalico.org/containerID: bb56f154d70efd59be1d08a9d42a0d1529f59a4031f46bc9371e442493fc9aff\n              cni.projectcalico.org/podIP: 192.168.226.210/32\n              cni.projectcalico.org/podIPs: 192.168.226.210/32\nStatus:       Running\nIP:           192.168.226.210\nIPs:\n  IP:           192.168.226.210\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://727ead105c839f34c83c5789d57a85f8256a9731d5b88ad3ed9fb962b9f9b099\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.33\n    Image ID:       k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Wed, 11 May 2022 08:51:07 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-28blm (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-28blm:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  3s    default-scheduler  Successfully assigned kubectl-6190/agnhost-primary-5pg5j to capz-ips2qf-md-0-4h2wc\n  Normal  Pulled     2s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.33\" already present on machine\n  Normal  Created    2s    kubelet            Created container agnhost-primary\n  Normal  Started    1s    kubelet            Started container agnhost-primary\n"
May 11 08:51:08.393: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://capz-ips2qf-6d06e991.eastus2.cloudapp.azure.com:6443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig --namespace=kubectl-6190 describe rc agnhost-primary'
May 11 08:51:08.687: INFO: stderr: ""
May 11 08:51:08.687: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-6190\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.33\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  3s    replication-controller  Created pod: agnhost-primary-5pg5j\n"
May 11 08:51:08.687: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://capz-ips2qf-6d06e991.eastus2.cloudapp.azure.com:6443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig --namespace=kubectl-6190 describe service agnhost-primary'
May 11 08:51:08.978: INFO: stderr: ""
May 11 08:51:08.978: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-6190\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Family Policy:  SingleStack\nIP Families:       IPv4\nIP:                10.98.7.97\nIPs:               10.98.7.97\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         192.168.226.210:6379\nSession Affinity:  None\nEvents:            <none>\n"
May 11 08:51:09.030: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://capz-ips2qf-6d06e991.eastus2.cloudapp.azure.com:6443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig --namespace=kubectl-6190 describe node capz-ips2qf-control-plane-2f8k9'
May 11 08:51:09.405: INFO: stderr: ""
May 11 08:51:09.405: INFO: stdout: "Name:               capz-ips2qf-control-plane-2f8k9\nRoles:              control-plane,master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=Standard_D2s_v3\n                    beta.kubernetes.io/os=linux\n                    failure-domain.beta.kubernetes.io/region=eastus2\n                    failure-domain.beta.kubernetes.io/zone=eastus2-3\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=capz-ips2qf-control-plane-2f8k9\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/control-plane=\n                    node-role.kubernetes.io/master=\n                    node.kubernetes.io/exclude-from-external-load-balancers=\n                    node.kubernetes.io/instance-type=Standard_D2s_v3\n                    topology.kubernetes.io/region=eastus2\n                    topology.kubernetes.io/zone=eastus2-3\nAnnotations:        cluster.x-k8s.io/cluster-name: capz-ips2qf\n                    cluster.x-k8s.io/cluster-namespace: default\n                    cluster.x-k8s.io/machine: capz-ips2qf-control-plane-mdkbd\n                    cluster.x-k8s.io/owner-kind: KubeadmControlPlane\n                    cluster.x-k8s.io/owner-name: capz-ips2qf-control-plane\n                    kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    projectcalico.org/IPv4Address: 10.0.0.5/16\n                    projectcalico.org/IPv4VXLANTunnelAddr: 192.168.121.0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Wed, 11 May 2022 08:33:44 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  capz-ips2qf-control-plane-2f8k9\n  AcquireTime:     <unset>\n  RenewTime:       Wed, 11 May 2022 08:51:04 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Wed, 11 May 2022 08:34:30 +0000   Wed, 11 May 2022 08:34:30 +0000   CalicoIsUp                   Calico is running on this node\n  MemoryPressure       False   Wed, 11 May 2022 08:50:03 +0000   Wed, 11 May 2022 08:33:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Wed, 11 May 2022 08:50:03 +0000   Wed, 11 May 2022 08:33:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Wed, 11 May 2022 08:50:03 +0000   Wed, 11 May 2022 08:33:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Wed, 11 May 2022 08:50:03 +0000   Wed, 11 May 2022 08:34:23 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.0.0.5\n  Hostname:    capz-ips2qf-control-plane-2f8k9\nCapacity:\n  cpu:                2\n  ephemeral-storage:  129900528Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             8145332Ki\n  pods:               110\nAllocatable:\n  cpu:                2\n  ephemeral-storage:  119716326407\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             8042932Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 5745274211a04ca4b94257cc74c5789a\n  System UUID:                ffa9a138-8b18-1047-ba31-281be36fd01d\n  Boot ID:                    ca07f612-7aaa-4872-820f-da3156152ba6\n  Kernel Version:             5.13.0-1017-azure\n  OS Image:                   Ubuntu 20.04.4 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.6.1\n  Kubelet Version:            v1.23.5\n  Kube-Proxy Version:         v1.23.5\nPodCIDR:                      10.244.3.0/24\nPodCIDRs:                     10.244.3.0/24\nProviderID:                   azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ips2qf/providers/Microsoft.Compute/virtualMachines/capz-ips2qf-control-plane-2f8k9\nNon-terminated Pods:          (7 in total)\n  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age\n  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---\n  kube-system                 calico-node-dwqwt                                          250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m\n  kube-system                 cloud-node-manager-bfw9s                                   50m (2%)      2 (100%)    50Mi (0%)        512Mi (6%)     17m\n  kube-system                 etcd-capz-ips2qf-control-plane-2f8k9                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m\n  kube-system                 kube-apiserver-capz-ips2qf-control-plane-2f8k9             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m\n  kube-system                 kube-controller-manager-capz-ips2qf-control-plane-2f8k9    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m\n  kube-system                 kube-proxy-dhgv2                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m\n  kube-system                 kube-scheduler-capz-ips2qf-control-plane-2f8k9             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                950m (47%)  2 (100%)\n  memory             150Mi (1%)  512Mi (6%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:\n  Type     Reason                   Age   From        Message\n  ----     ------                   ----  ----        -------\n  Normal   Starting                 16m   kube-proxy  \n  Warning  InvalidDiskCapacity      17m   kubelet     invalid capacity 0 on image filesystem\n  Normal   NodeHasSufficientMemory  17m   kubelet     Node capz-ips2qf-control-plane-2f8k9 status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure    17m   kubelet     Node capz-ips2qf-control-plane-2f8k9 status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID     17m   kubelet     Node capz-ips2qf-control-plane-2f8k9 status is now: NodeHasSufficientPID\n  Normal   NodeAllocatableEnforced  17m   kubelet     Updated Node Allocatable limit across pods\n  Normal   Starting                 17m   kubelet     Starting kubelet.\n  Normal   NodeReady                16m   kubelet     Node capz-ips2qf-control-plane-2f8k9 status is now: NodeReady\n"
May 11 08:51:09.405: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://capz-ips2qf-6d06e991.eastus2.cloudapp.azure.com:6443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig --namespace=kubectl-6190 describe namespace kubectl-6190'
May 11 08:51:09.725: INFO: stderr: ""
May 11 08:51:09.725: INFO: stdout: "Name:         kubectl-6190\nLabels:       e2e-framework=kubectl\n              e2e-run=426f3155-7430-4b28-941b-d416db3ee2fa\n              kubernetes.io/metadata.name=kubectl-6190\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:51:09.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6190" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":335,"completed":39,"skipped":825,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Secrets 
  should patch a secret [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Secrets
... skipping 11 lines ...
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-node] Secrets
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:51:10.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1481" for this suite.
•{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":335,"completed":40,"skipped":854,"failed":0}
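Editor's note: the Secret patch exercised above relies on a fixed API convention: Kubernetes serves Secret values base64-encoded under `.data`, so clients must encode on write and decode on read. A minimal sketch of that round trip (the key and value are illustrative, not the test's actual fixtures):

```python
import base64

def make_secret_data(string_data):
    # Kubernetes stores Secret values base64-encoded under .data.
    return {k: base64.b64encode(v.encode()).decode() for k, v in string_data.items()}

def read_secret_value(data, key):
    # Decode a .data entry back to its original string.
    return base64.b64decode(data[key]).decode()
```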
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] Watchers
... skipping 23 lines ...
May 11 08:51:21.180: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-4590  8698a24a-782a-4954-bbd2-7060a7b5a635 5699 0 2022-05-11 08:51:10 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2022-05-11 08:51:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
May 11 08:51:21.180: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-4590  8698a24a-782a-4954-bbd2-7060a7b5a635 5700 0 2022-05-11 08:51:10 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2022-05-11 08:51:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:51:21.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4590" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":335,"completed":41,"skipped":882,"failed":0}
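Editor's note: the watch test above hinges on equality-based label-selector semantics — an object drops out of a selector-filtered watch as soon as its labels stop matching, which is why changing the `watch-this-configmap` label produces the DELETED event logged above. The matching rule itself is simple:

```python
def matches_selector(labels, selector):
    # Equality-based selector: every selector key must be present
    # on the object with exactly the same value.
    return all(labels.get(k) == v for k, v in selector.items())
```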
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
[BeforeEach] [sig-node] Security Context
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
May 11 08:51:21.572: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-00668703-cd58-4f75-bb9b-352da7976073" in namespace "security-context-test-4884" to be "Succeeded or Failed"
May 11 08:51:21.615: INFO: Pod "busybox-readonly-true-00668703-cd58-4f75-bb9b-352da7976073": Phase="Pending", Reason="", readiness=false. Elapsed: 43.02348ms
May 11 08:51:23.655: INFO: Pod "busybox-readonly-true-00668703-cd58-4f75-bb9b-352da7976073": Phase="Failed", Reason="", readiness=false. Elapsed: 2.083121421s
May 11 08:51:23.655: INFO: Pod "busybox-readonly-true-00668703-cd58-4f75-bb9b-352da7976073" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:51:23.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4884" for this suite.
•{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":335,"completed":42,"skipped":919,"failed":0}
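Editor's note: the pod this test creates boils down to a container with `readOnlyRootFilesystem: true`, so a write to the root filesystem fails and the pod reaches a terminal phase either way — which is why the framework's wait condition is literally "Succeeded or Failed" and a `Phase="Failed"` pod satisfies it. A hypothetical equivalent manifest as a Python dict (the name and command are illustrative):

```python
# Sketch of a pod spec equivalent to what the e2e test submits; names are made up.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "busybox-readonly-true"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "busybox",
            "image": "busybox",
            # With a read-only rootfs this write fails, so the pod ends Failed,
            # which still satisfies the test's "Succeeded or Failed" condition.
            "command": ["/bin/sh", "-c", "touch /file"],
            "securityContext": {"readOnlyRootFilesystem": True},
        }],
    },
}
```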
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:51:28.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2897" for this suite.
STEP: Destroying namespace "webhook-2897-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":335,"completed":43,"skipped":964,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Container Runtime blackbox test when running a container with a new image 
  should be able to pull from private registry with secret [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393
[BeforeEach] [sig-node] Container Runtime
... skipping 10 lines ...
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:51:32.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2547" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":335,"completed":44,"skipped":1034,"failed":0}
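Editor's note: pulling from a private registry works by referencing a `kubernetes.io/dockerconfigjson` Secret from the pod via `imagePullSecrets`; kubelet hands those credentials to the runtime at pull time. A hedged sketch of the wiring (registry host, secret name, and image are made up for illustration):

```python
import base64
import json

def docker_config_json(registry, username, password):
    # Payload of a kubernetes.io/dockerconfigjson Secret: per-registry credentials,
    # with "auth" holding base64("username:password").
    auth = base64.b64encode(f"{username}:{password}".encode()).decode()
    return json.dumps({"auths": {registry: {"username": username,
                                            "password": password,
                                            "auth": auth}}})

# Hypothetical pod spec fragment referencing that secret by name.
pod_spec = {
    "imagePullSecrets": [{"name": "regcred"}],
    "containers": [{"name": "app", "image": "registry.example.com/app:1.0"}],
}
```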
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] DNS
... skipping 19 lines ...
May 11 08:51:51.356: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-260.svc.cluster.local from pod dns-260/dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d: the server could not find the requested resource (get pods dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d)
May 11 08:51:51.393: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-260.svc.cluster.local from pod dns-260/dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d: the server could not find the requested resource (get pods dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d)
May 11 08:51:51.430: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-260.svc.cluster.local from pod dns-260/dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d: the server could not find the requested resource (get pods dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d)
May 11 08:51:51.467: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-260.svc.cluster.local from pod dns-260/dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d: the server could not find the requested resource (get pods dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d)
May 11 08:51:51.505: INFO: Unable to read jessie_udp@dns-test-service-2.dns-260.svc.cluster.local from pod dns-260/dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d: the server could not find the requested resource (get pods dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d)
May 11 08:51:51.542: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-260.svc.cluster.local from pod dns-260/dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d: the server could not find the requested resource (get pods dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d)
May 11 08:51:51.542: INFO: Lookups using dns-260/dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-260.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-260.svc.cluster.local wheezy_udp@dns-test-service-2.dns-260.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-260.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-260.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-260.svc.cluster.local jessie_udp@dns-test-service-2.dns-260.svc.cluster.local jessie_tcp@dns-test-service-2.dns-260.svc.cluster.local]

May 11 08:51:56.580: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-260.svc.cluster.local from pod dns-260/dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d: the server could not find the requested resource (get pods dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d)
May 11 08:51:56.617: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-260.svc.cluster.local from pod dns-260/dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d: the server could not find the requested resource (get pods dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d)
May 11 08:51:56.655: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-260.svc.cluster.local from pod dns-260/dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d: the server could not find the requested resource (get pods dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d)
May 11 08:51:56.692: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-260.svc.cluster.local from pod dns-260/dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d: the server could not find the requested resource (get pods dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d)
May 11 08:51:56.729: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-260.svc.cluster.local from pod dns-260/dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d: the server could not find the requested resource (get pods dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d)
May 11 08:51:56.766: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-260.svc.cluster.local from pod dns-260/dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d: the server could not find the requested resource (get pods dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d)
May 11 08:51:56.803: INFO: Unable to read jessie_udp@dns-test-service-2.dns-260.svc.cluster.local from pod dns-260/dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d: the server could not find the requested resource (get pods dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d)
May 11 08:51:56.840: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-260.svc.cluster.local from pod dns-260/dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d: the server could not find the requested resource (get pods dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d)
May 11 08:51:56.840: INFO: Lookups using dns-260/dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-260.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-260.svc.cluster.local wheezy_udp@dns-test-service-2.dns-260.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-260.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-260.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-260.svc.cluster.local jessie_udp@dns-test-service-2.dns-260.svc.cluster.local jessie_tcp@dns-test-service-2.dns-260.svc.cluster.local]

May 11 08:52:01.580: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-260.svc.cluster.local from pod dns-260/dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d: the server could not find the requested resource (get pods dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d)
May 11 08:52:01.618: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-260.svc.cluster.local from pod dns-260/dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d: the server could not find the requested resource (get pods dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d)
May 11 08:52:01.655: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-260.svc.cluster.local from pod dns-260/dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d: the server could not find the requested resource (get pods dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d)
May 11 08:52:01.692: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-260.svc.cluster.local from pod dns-260/dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d: the server could not find the requested resource (get pods dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d)
May 11 08:52:01.730: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-260.svc.cluster.local from pod dns-260/dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d: the server could not find the requested resource (get pods dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d)
May 11 08:52:01.766: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-260.svc.cluster.local from pod dns-260/dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d: the server could not find the requested resource (get pods dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d)
May 11 08:52:01.804: INFO: Unable to read jessie_udp@dns-test-service-2.dns-260.svc.cluster.local from pod dns-260/dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d: the server could not find the requested resource (get pods dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d)
May 11 08:52:01.846: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-260.svc.cluster.local from pod dns-260/dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d: the server could not find the requested resource (get pods dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d)
May 11 08:52:01.846: INFO: Lookups using dns-260/dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-260.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-260.svc.cluster.local wheezy_udp@dns-test-service-2.dns-260.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-260.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-260.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-260.svc.cluster.local jessie_udp@dns-test-service-2.dns-260.svc.cluster.local jessie_tcp@dns-test-service-2.dns-260.svc.cluster.local]

May 11 08:52:06.909: INFO: DNS probes using dns-260/dns-test-02c0ca9a-caaf-4b2d-887b-4e686e0d659d succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:52:07.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-260" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":335,"completed":45,"skipped":1068,"failed":0}
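Editor's note: the names probed above follow the pod-subdomain DNS scheme — a pod whose `spec.hostname` and `spec.subdomain` match a headless Service resolves at `<hostname>.<subdomain>.<namespace>.svc.<cluster-domain>`, which is exactly the shape of `dns-querier-2.dns-test-service-2.dns-260.svc.cluster.local` in the log. The construction is pure string assembly:

```python
def pod_fqdn(hostname, subdomain, namespace, cluster_domain="cluster.local"):
    # DNS name of a pod exposed through a headless service named <subdomain>.
    return f"{hostname}.{subdomain}.{namespace}.svc.{cluster_domain}"
```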
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] Services
... skipping 71 lines ...
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:52:23.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1841" for this suite.
[AfterEach] [sig-network] Services
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":335,"completed":46,"skipped":1107,"failed":0}
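Editor's note: "switching session affinity" here means toggling `sessionAffinity: ClientIP` on the Service — kube-proxy remembers which endpoint first served a client address and keeps routing that client there, while `sessionAffinity: None` re-picks per request. A toy model of the bookkeeping (not kube-proxy's actual implementation):

```python
import random

def route(client_ip, endpoints, affinity_table, session_affinity=True):
    # With ClientIP affinity the client's first endpoint choice is sticky;
    # with affinity off, every request picks an endpoint afresh.
    if session_affinity:
        if client_ip not in affinity_table:
            affinity_table[client_ip] = random.choice(endpoints)
        return affinity_table[client_ip]
    return random.choice(endpoints)
```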
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:52:31.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-6419" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":335,"completed":47,"skipped":1118,"failed":0}
SSS
------------------------------
[sig-node] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Security Context
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
May 11 08:52:31.772: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-476c5725-1cbe-43c3-a57c-cb6b915dda90" in namespace "security-context-test-9757" to be "Succeeded or Failed"
May 11 08:52:31.808: INFO: Pod "busybox-privileged-false-476c5725-1cbe-43c3-a57c-cb6b915dda90": Phase="Pending", Reason="", readiness=false. Elapsed: 36.08274ms
May 11 08:52:33.847: INFO: Pod "busybox-privileged-false-476c5725-1cbe-43c3-a57c-cb6b915dda90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.0754172s
May 11 08:52:33.848: INFO: Pod "busybox-privileged-false-476c5725-1cbe-43c3-a57c-cb6b915dda90" satisfied condition "Succeeded or Failed"
May 11 08:52:33.895: INFO: Got logs for pod "busybox-privileged-false-476c5725-1cbe-43c3-a57c-cb6b915dda90": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:52:33.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9757" for this suite.
•{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":48,"skipped":1121,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 26 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:52:38.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6294" for this suite.
STEP: Destroying namespace "webhook-6294-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":335,"completed":49,"skipped":1132,"failed":0}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 17 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:52:52.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6795" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":335,"completed":50,"skipped":1133,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Container Runtime blackbox test when running a container with a new image 
  should be able to pull image [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382
[BeforeEach] [sig-node] Container Runtime
... skipping 9 lines ...
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:52:55.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8147" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":335,"completed":51,"skipped":1163,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected configMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-map-f4aa149f-e5dc-423a-a055-1fae0521a7d6
STEP: Creating a pod to test consume configMaps
May 11 08:52:55.954: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8b0301ce-6d31-44b5-9706-d695511e2e3c" in namespace "projected-9564" to be "Succeeded or Failed"
May 11 08:52:55.999: INFO: Pod "pod-projected-configmaps-8b0301ce-6d31-44b5-9706-d695511e2e3c": Phase="Pending", Reason="", readiness=false. Elapsed: 44.868593ms
May 11 08:52:58.038: INFO: Pod "pod-projected-configmaps-8b0301ce-6d31-44b5-9706-d695511e2e3c": Phase="Running", Reason="", readiness=true. Elapsed: 2.08400372s
May 11 08:53:00.078: INFO: Pod "pod-projected-configmaps-8b0301ce-6d31-44b5-9706-d695511e2e3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.12355896s
STEP: Saw pod success
May 11 08:53:00.078: INFO: Pod "pod-projected-configmaps-8b0301ce-6d31-44b5-9706-d695511e2e3c" satisfied condition "Succeeded or Failed"
May 11 08:53:00.115: INFO: Trying to get logs from node capz-ips2qf-md-0-4h2wc pod pod-projected-configmaps-8b0301ce-6d31-44b5-9706-d695511e2e3c container agnhost-container: <nil>
STEP: delete the pod
May 11 08:53:00.224: INFO: Waiting for pod pod-projected-configmaps-8b0301ce-6d31-44b5-9706-d695511e2e3c to disappear
May 11 08:53:00.261: INFO: Pod pod-projected-configmaps-8b0301ce-6d31-44b5-9706-d695511e2e3c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:53:00.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9564" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":335,"completed":52,"skipped":1179,"failed":0}
S
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-scheduling] LimitRange
... skipping 32 lines ...
May 11 08:53:08.225: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:53:08.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-9177" for this suite.
•{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":335,"completed":53,"skipped":1180,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected secret
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating projection with secret that has name projected-secret-test-map-4dfcdbee-c9df-4bec-b3ad-d938bf79e342
STEP: Creating a pod to test consume secrets
May 11 08:53:08.697: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f2b57050-1f96-4240-9769-0c4a932733e4" in namespace "projected-3286" to be "Succeeded or Failed"
May 11 08:53:08.736: INFO: Pod "pod-projected-secrets-f2b57050-1f96-4240-9769-0c4a932733e4": Phase="Pending", Reason="", readiness=false. Elapsed: 39.100484ms
May 11 08:53:10.776: INFO: Pod "pod-projected-secrets-f2b57050-1f96-4240-9769-0c4a932733e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.078167473s
STEP: Saw pod success
May 11 08:53:10.776: INFO: Pod "pod-projected-secrets-f2b57050-1f96-4240-9769-0c4a932733e4" satisfied condition "Succeeded or Failed"
May 11 08:53:10.813: INFO: Trying to get logs from node capz-ips2qf-md-0-n6nwl pod pod-projected-secrets-f2b57050-1f96-4240-9769-0c4a932733e4 container projected-secret-volume-test: <nil>
STEP: delete the pod
May 11 08:53:10.910: INFO: Waiting for pod pod-projected-secrets-f2b57050-1f96-4240-9769-0c4a932733e4 to disappear
May 11 08:53:10.945: INFO: Pod pod-projected-secrets-f2b57050-1f96-4240-9769-0c4a932733e4 no longer exists
[AfterEach] [sig-storage] Projected secret
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:53:10.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3286" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":335,"completed":54,"skipped":1230,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] ConfigMap
... skipping 10 lines ...
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:53:13.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7500" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":335,"completed":55,"skipped":1239,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] version v1
... skipping 39 lines ...
May 11 08:53:16.683: INFO: Starting http.Client for https://capz-ips2qf-6d06e991.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/proxy-7011/services/test-service/proxy/some/path/with/PUT
May 11 08:53:16.721: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT
[AfterEach] version v1
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:53:16.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7011" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":335,"completed":56,"skipped":1278,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] DNS
... skipping 19 lines ...
May 11 08:53:39.444: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4637.svc.cluster.local from pod dns-4637/dns-test-9c7aaf81-623e-49ab-b770-92a660838818: the server could not find the requested resource (get pods dns-test-9c7aaf81-623e-49ab-b770-92a660838818)
May 11 08:53:39.482: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4637.svc.cluster.local from pod dns-4637/dns-test-9c7aaf81-623e-49ab-b770-92a660838818: the server could not find the requested resource (get pods dns-test-9c7aaf81-623e-49ab-b770-92a660838818)
May 11 08:53:39.671: INFO: Unable to read jessie_udp@dns-test-service.dns-4637.svc.cluster.local from pod dns-4637/dns-test-9c7aaf81-623e-49ab-b770-92a660838818: the server could not find the requested resource (get pods dns-test-9c7aaf81-623e-49ab-b770-92a660838818)
May 11 08:53:39.708: INFO: Unable to read jessie_tcp@dns-test-service.dns-4637.svc.cluster.local from pod dns-4637/dns-test-9c7aaf81-623e-49ab-b770-92a660838818: the server could not find the requested resource (get pods dns-test-9c7aaf81-623e-49ab-b770-92a660838818)
May 11 08:53:39.746: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4637.svc.cluster.local from pod dns-4637/dns-test-9c7aaf81-623e-49ab-b770-92a660838818: the server could not find the requested resource (get pods dns-test-9c7aaf81-623e-49ab-b770-92a660838818)
May 11 08:53:39.783: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4637.svc.cluster.local from pod dns-4637/dns-test-9c7aaf81-623e-49ab-b770-92a660838818: the server could not find the requested resource (get pods dns-test-9c7aaf81-623e-49ab-b770-92a660838818)
May 11 08:53:39.933: INFO: Lookups using dns-4637/dns-test-9c7aaf81-623e-49ab-b770-92a660838818 failed for: [wheezy_udp@dns-test-service.dns-4637.svc.cluster.local wheezy_tcp@dns-test-service.dns-4637.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4637.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4637.svc.cluster.local jessie_udp@dns-test-service.dns-4637.svc.cluster.local jessie_tcp@dns-test-service.dns-4637.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4637.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4637.svc.cluster.local]

May 11 08:53:44.970: INFO: Unable to read wheezy_udp@dns-test-service.dns-4637.svc.cluster.local from pod dns-4637/dns-test-9c7aaf81-623e-49ab-b770-92a660838818: the server could not find the requested resource (get pods dns-test-9c7aaf81-623e-49ab-b770-92a660838818)
May 11 08:53:45.007: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4637.svc.cluster.local from pod dns-4637/dns-test-9c7aaf81-623e-49ab-b770-92a660838818: the server could not find the requested resource (get pods dns-test-9c7aaf81-623e-49ab-b770-92a660838818)
May 11 08:53:45.044: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4637.svc.cluster.local from pod dns-4637/dns-test-9c7aaf81-623e-49ab-b770-92a660838818: the server could not find the requested resource (get pods dns-test-9c7aaf81-623e-49ab-b770-92a660838818)
May 11 08:53:45.081: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4637.svc.cluster.local from pod dns-4637/dns-test-9c7aaf81-623e-49ab-b770-92a660838818: the server could not find the requested resource (get pods dns-test-9c7aaf81-623e-49ab-b770-92a660838818)
May 11 08:53:45.267: INFO: Unable to read jessie_udp@dns-test-service.dns-4637.svc.cluster.local from pod dns-4637/dns-test-9c7aaf81-623e-49ab-b770-92a660838818: the server could not find the requested resource (get pods dns-test-9c7aaf81-623e-49ab-b770-92a660838818)
May 11 08:53:45.304: INFO: Unable to read jessie_tcp@dns-test-service.dns-4637.svc.cluster.local from pod dns-4637/dns-test-9c7aaf81-623e-49ab-b770-92a660838818: the server could not find the requested resource (get pods dns-test-9c7aaf81-623e-49ab-b770-92a660838818)
May 11 08:53:45.343: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4637.svc.cluster.local from pod dns-4637/dns-test-9c7aaf81-623e-49ab-b770-92a660838818: the server could not find the requested resource (get pods dns-test-9c7aaf81-623e-49ab-b770-92a660838818)
May 11 08:53:45.380: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4637.svc.cluster.local from pod dns-4637/dns-test-9c7aaf81-623e-49ab-b770-92a660838818: the server could not find the requested resource (get pods dns-test-9c7aaf81-623e-49ab-b770-92a660838818)
May 11 08:53:45.528: INFO: Lookups using dns-4637/dns-test-9c7aaf81-623e-49ab-b770-92a660838818 failed for: [wheezy_udp@dns-test-service.dns-4637.svc.cluster.local wheezy_tcp@dns-test-service.dns-4637.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4637.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4637.svc.cluster.local jessie_udp@dns-test-service.dns-4637.svc.cluster.local jessie_tcp@dns-test-service.dns-4637.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4637.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4637.svc.cluster.local]

May 11 08:53:50.532: INFO: DNS probes using dns-4637/dns-test-9c7aaf81-623e-49ab-b770-92a660838818 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:53:50.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4637" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":335,"completed":57,"skipped":1312,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Container Lifecycle Hook
... skipping 22 lines ...
May 11 08:53:59.608: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [sig-node] Container Lifecycle Hook
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:53:59.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2130" for this suite.
•{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":335,"completed":58,"skipped":1349,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected combined
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-projected-all-test-volume-e3c12800-545a-4112-ae7f-ff21a53790c6
STEP: Creating secret with name secret-projected-all-test-volume-9a87125a-6492-4319-9dad-487e8bb4003f
STEP: Creating a pod to test Check all projections for projected volume plugin
May 11 08:54:00.119: INFO: Waiting up to 5m0s for pod "projected-volume-02139641-a543-47c9-adde-d7245294270e" in namespace "projected-9899" to be "Succeeded or Failed"
May 11 08:54:00.156: INFO: Pod "projected-volume-02139641-a543-47c9-adde-d7245294270e": Phase="Pending", Reason="", readiness=false. Elapsed: 36.714458ms
May 11 08:54:02.195: INFO: Pod "projected-volume-02139641-a543-47c9-adde-d7245294270e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.075848662s
STEP: Saw pod success
May 11 08:54:02.195: INFO: Pod "projected-volume-02139641-a543-47c9-adde-d7245294270e" satisfied condition "Succeeded or Failed"
May 11 08:54:02.232: INFO: Trying to get logs from node capz-ips2qf-md-0-4h2wc pod projected-volume-02139641-a543-47c9-adde-d7245294270e container projected-all-volume-test: <nil>
STEP: delete the pod
May 11 08:54:02.338: INFO: Waiting for pod projected-volume-02139641-a543-47c9-adde-d7245294270e to disappear
May 11 08:54:02.375: INFO: Pod projected-volume-02139641-a543-47c9-adde-d7245294270e no longer exists
[AfterEach] [sig-storage] Projected combined
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:54:02.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9899" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":335,"completed":59,"skipped":1376,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 25 lines ...
May 11 08:54:03.772: INFO: created pod pod-service-account-nomountsa-nomountspec
May 11 08:54:03.772: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:54:03.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4081" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":335,"completed":60,"skipped":1399,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-cli] Kubectl client
... skipping 191 lines ...
May 11 08:54:15.062: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 11 08:54:15.062: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:54:15.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6513" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":335,"completed":61,"skipped":1421,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected configMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-map-12da489d-b7e2-47d3-984a-156ef7007eaf
STEP: Creating a pod to test consume configMaps
May 11 08:54:15.501: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7659f1ec-6591-4e29-a1db-56704f416b82" in namespace "projected-8033" to be "Succeeded or Failed"
May 11 08:54:15.545: INFO: Pod "pod-projected-configmaps-7659f1ec-6591-4e29-a1db-56704f416b82": Phase="Pending", Reason="", readiness=false. Elapsed: 43.784873ms
May 11 08:54:17.586: INFO: Pod "pod-projected-configmaps-7659f1ec-6591-4e29-a1db-56704f416b82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.085022616s
STEP: Saw pod success
May 11 08:54:17.587: INFO: Pod "pod-projected-configmaps-7659f1ec-6591-4e29-a1db-56704f416b82" satisfied condition "Succeeded or Failed"
May 11 08:54:17.624: INFO: Trying to get logs from node capz-ips2qf-md-0-n6nwl pod pod-projected-configmaps-7659f1ec-6591-4e29-a1db-56704f416b82 container agnhost-container: <nil>
STEP: delete the pod
May 11 08:54:17.720: INFO: Waiting for pod pod-projected-configmaps-7659f1ec-6591-4e29-a1db-56704f416b82 to disappear
May 11 08:54:17.757: INFO: Pod pod-projected-configmaps-7659f1ec-6591-4e29-a1db-56704f416b82 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:54:17.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8033" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":335,"completed":62,"skipped":1455,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 11 08:54:18.167: INFO: Waiting up to 5m0s for pod "pod-92fb0282-c60e-498b-b034-7e67162b6038" in namespace "emptydir-9441" to be "Succeeded or Failed"
May 11 08:54:18.203: INFO: Pod "pod-92fb0282-c60e-498b-b034-7e67162b6038": Phase="Pending", Reason="", readiness=false. Elapsed: 36.347929ms
May 11 08:54:20.240: INFO: Pod "pod-92fb0282-c60e-498b-b034-7e67162b6038": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.073623523s
STEP: Saw pod success
May 11 08:54:20.240: INFO: Pod "pod-92fb0282-c60e-498b-b034-7e67162b6038" satisfied condition "Succeeded or Failed"
May 11 08:54:20.288: INFO: Trying to get logs from node capz-ips2qf-md-0-n6nwl pod pod-92fb0282-c60e-498b-b034-7e67162b6038 container test-container: <nil>
STEP: delete the pod
May 11 08:54:20.389: INFO: Waiting for pod pod-92fb0282-c60e-498b-b034-7e67162b6038 to disappear
May 11 08:54:20.425: INFO: Pod pod-92fb0282-c60e-498b-b034-7e67162b6038 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:54:20.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9441" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":63,"skipped":1468,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Security Context
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
May 11 08:54:20.822: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-75593aaf-8315-477b-98d5-8acc7779ca79" in namespace "security-context-test-4816" to be "Succeeded or Failed"
May 11 08:54:20.858: INFO: Pod "busybox-readonly-false-75593aaf-8315-477b-98d5-8acc7779ca79": Phase="Pending", Reason="", readiness=false. Elapsed: 35.90163ms
May 11 08:54:22.896: INFO: Pod "busybox-readonly-false-75593aaf-8315-477b-98d5-8acc7779ca79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.073691613s
May 11 08:54:22.896: INFO: Pod "busybox-readonly-false-75593aaf-8315-477b-98d5-8acc7779ca79" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:54:22.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4816" for this suite.
•{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":335,"completed":64,"skipped":1496,"failed":0}
S
------------------------------
[sig-node] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Probing container
... skipping 13 lines ...
May 11 08:54:25.409: INFO: Initial restart count of pod busybox-32d0185b-ea09-486d-b760-e56ee60b121d is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:58:26.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1282" for this suite.
•{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":335,"completed":65,"skipped":1497,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Downward API volume
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
May 11 08:58:26.476: INFO: Waiting up to 5m0s for pod "downwardapi-volume-24c29707-e975-4841-8bb1-67d7333e7eaa" in namespace "downward-api-938" to be "Succeeded or Failed"
May 11 08:58:26.515: INFO: Pod "downwardapi-volume-24c29707-e975-4841-8bb1-67d7333e7eaa": Phase="Pending", Reason="", readiness=false. Elapsed: 39.01448ms
May 11 08:58:28.553: INFO: Pod "downwardapi-volume-24c29707-e975-4841-8bb1-67d7333e7eaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.077283162s
STEP: Saw pod success
May 11 08:58:28.553: INFO: Pod "downwardapi-volume-24c29707-e975-4841-8bb1-67d7333e7eaa" satisfied condition "Succeeded or Failed"
May 11 08:58:28.591: INFO: Trying to get logs from node capz-ips2qf-md-0-n6nwl pod downwardapi-volume-24c29707-e975-4841-8bb1-67d7333e7eaa container client-container: <nil>
STEP: delete the pod
May 11 08:58:28.737: INFO: Waiting for pod downwardapi-volume-24c29707-e975-4841-8bb1-67d7333e7eaa to disappear
May 11 08:58:28.774: INFO: Pod downwardapi-volume-24c29707-e975-4841-8bb1-67d7333e7eaa no longer exists
[AfterEach] [sig-storage] Downward API volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:58:28.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-938" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":335,"completed":66,"skipped":1520,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] Service endpoints latency
... skipping 418 lines ...
May 11 08:58:39.966: INFO: 99 %ile: 889.991036ms
May 11 08:58:39.966: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:58:39.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-5165" for this suite.
•{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":335,"completed":67,"skipped":1528,"failed":0}
SSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  should validate Statefulset Status endpoints [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] StatefulSet
... skipping 34 lines ...
May 11 08:59:01.362: INFO: Waiting for statefulset status.replicas updated to 0
May 11 08:59:01.398: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:59:01.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4374" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":335,"completed":68,"skipped":1532,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Downward API
... skipping 3 lines ...
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward api env vars
May 11 08:59:01.957: INFO: Waiting up to 5m0s for pod "downward-api-126f413e-14ed-465f-838c-9a8a0270b1b0" in namespace "downward-api-5841" to be "Succeeded or Failed"
May 11 08:59:01.993: INFO: Pod "downward-api-126f413e-14ed-465f-838c-9a8a0270b1b0": Phase="Pending", Reason="", readiness=false. Elapsed: 36.235144ms
May 11 08:59:04.030: INFO: Pod "downward-api-126f413e-14ed-465f-838c-9a8a0270b1b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.073431567s
STEP: Saw pod success
May 11 08:59:04.030: INFO: Pod "downward-api-126f413e-14ed-465f-838c-9a8a0270b1b0" satisfied condition "Succeeded or Failed"
May 11 08:59:04.068: INFO: Trying to get logs from node capz-ips2qf-md-0-4h2wc pod downward-api-126f413e-14ed-465f-838c-9a8a0270b1b0 container dapi-container: <nil>
STEP: delete the pod
May 11 08:59:04.191: INFO: Waiting for pod downward-api-126f413e-14ed-465f-838c-9a8a0270b1b0 to disappear
May 11 08:59:04.228: INFO: Pod downward-api-126f413e-14ed-465f-838c-9a8a0270b1b0 no longer exists
[AfterEach] [sig-node] Downward API
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:59:04.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5841" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":335,"completed":69,"skipped":1563,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-cli] Kubectl client
... skipping 52 lines ...
May 11 08:59:12.659: INFO: stderr: ""
May 11 08:59:12.659: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:59:12.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6440" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":335,"completed":70,"skipped":1568,"failed":0}

------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  should list, patch and delete a collection of StatefulSets [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] StatefulSet
... skipping 24 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120
May 11 08:59:33.610: INFO: Deleting all statefulset in ns statefulset-8943
[AfterEach] [sig-apps] StatefulSet
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:59:33.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8943" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":335,"completed":71,"skipped":1568,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
May 11 08:59:34.067: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl kubectl --server=https://capz-ips2qf-6d06e991.eastus2.cloudapp.azure.com:6443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig --namespace=kubectl-7129 proxy --unix-socket=/tmp/kubectl-proxy-unix1414910441/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:59:34.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7129" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":335,"completed":72,"skipped":1586,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
May 11 08:59:34.528: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d8ba791a-5e49-4511-b291-0533bdd5990e" in namespace "projected-8747" to be "Succeeded or Failed"
May 11 08:59:34.567: INFO: Pod "downwardapi-volume-d8ba791a-5e49-4511-b291-0533bdd5990e": Phase="Pending", Reason="", readiness=false. Elapsed: 38.867009ms
May 11 08:59:36.605: INFO: Pod "downwardapi-volume-d8ba791a-5e49-4511-b291-0533bdd5990e": Phase="Running", Reason="", readiness=true. Elapsed: 2.076903359s
May 11 08:59:38.643: INFO: Pod "downwardapi-volume-d8ba791a-5e49-4511-b291-0533bdd5990e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.11469274s
STEP: Saw pod success
May 11 08:59:38.643: INFO: Pod "downwardapi-volume-d8ba791a-5e49-4511-b291-0533bdd5990e" satisfied condition "Succeeded or Failed"
May 11 08:59:38.680: INFO: Trying to get logs from node capz-ips2qf-md-0-n6nwl pod downwardapi-volume-d8ba791a-5e49-4511-b291-0533bdd5990e container client-container: <nil>
STEP: delete the pod
May 11 08:59:38.781: INFO: Waiting for pod downwardapi-volume-d8ba791a-5e49-4511-b291-0533bdd5990e to disappear
May 11 08:59:38.818: INFO: Pod downwardapi-volume-d8ba791a-5e49-4511-b291-0533bdd5990e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:59:38.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8747" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":73,"skipped":1591,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] server version 
  should find the server version [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] server version
... skipping 12 lines ...
May 11 08:59:39.198: INFO: cleanMinorVersion: 23
May 11 08:59:39.198: INFO: Minor version: 23
[AfterEach] [sig-api-machinery] server version
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:59:39.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-7655" for this suite.
•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":335,"completed":74,"skipped":1624,"failed":0}
S
------------------------------
[sig-apps] DisruptionController 
  should observe PodDisruptionBudget status updated [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] DisruptionController
... skipping 11 lines ...
STEP: Waiting for all pods to be running
May 11 08:59:39.804: INFO: running pods: 0 < 3
[AfterEach] [sig-apps] DisruptionController
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:59:41.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-1098" for this suite.
•{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":335,"completed":75,"skipped":1625,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Downward API volume
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
May 11 08:59:42.261: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aecd2d0a-7f3f-4b21-b19b-50463addbb53" in namespace "downward-api-8366" to be "Succeeded or Failed"
May 11 08:59:42.299: INFO: Pod "downwardapi-volume-aecd2d0a-7f3f-4b21-b19b-50463addbb53": Phase="Pending", Reason="", readiness=false. Elapsed: 38.012166ms
May 11 08:59:44.337: INFO: Pod "downwardapi-volume-aecd2d0a-7f3f-4b21-b19b-50463addbb53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.075974323s
STEP: Saw pod success
May 11 08:59:44.337: INFO: Pod "downwardapi-volume-aecd2d0a-7f3f-4b21-b19b-50463addbb53" satisfied condition "Succeeded or Failed"
May 11 08:59:44.376: INFO: Trying to get logs from node capz-ips2qf-md-0-4h2wc pod downwardapi-volume-aecd2d0a-7f3f-4b21-b19b-50463addbb53 container client-container: <nil>
STEP: delete the pod
May 11 08:59:44.470: INFO: Waiting for pod downwardapi-volume-aecd2d0a-7f3f-4b21-b19b-50463addbb53 to disappear
May 11 08:59:44.509: INFO: Pod downwardapi-volume-aecd2d0a-7f3f-4b21-b19b-50463addbb53 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:59:44.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8366" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":335,"completed":76,"skipped":1629,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Discovery 
  should validate PreferredVersion for each APIGroup [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] Discovery
... skipping 89 lines ...
May 11 08:59:45.996: INFO: Versions found [{crd.projectcalico.org/v1 v1}]
May 11 08:59:45.996: INFO: crd.projectcalico.org/v1 matches crd.projectcalico.org/v1
[AfterEach] [sig-api-machinery] Discovery
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:59:45.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-2592" for this suite.
•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":335,"completed":77,"skipped":1636,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Secrets
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name secret-test-6bd5e403-592e-4bdc-b449-80e78c4912e2
STEP: Creating a pod to test consume secrets
May 11 08:59:46.421: INFO: Waiting up to 5m0s for pod "pod-secrets-34bcef35-7be0-48d5-8e14-d61c6256e59d" in namespace "secrets-9290" to be "Succeeded or Failed"
May 11 08:59:46.460: INFO: Pod "pod-secrets-34bcef35-7be0-48d5-8e14-d61c6256e59d": Phase="Pending", Reason="", readiness=false. Elapsed: 38.775871ms
May 11 08:59:48.508: INFO: Pod "pod-secrets-34bcef35-7be0-48d5-8e14-d61c6256e59d": Phase="Running", Reason="", readiness=true. Elapsed: 2.086970147s
May 11 08:59:50.558: INFO: Pod "pod-secrets-34bcef35-7be0-48d5-8e14-d61c6256e59d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.13633722s
STEP: Saw pod success
May 11 08:59:50.558: INFO: Pod "pod-secrets-34bcef35-7be0-48d5-8e14-d61c6256e59d" satisfied condition "Succeeded or Failed"
May 11 08:59:50.595: INFO: Trying to get logs from node capz-ips2qf-md-0-4h2wc pod pod-secrets-34bcef35-7be0-48d5-8e14-d61c6256e59d container secret-volume-test: <nil>
STEP: delete the pod
May 11 08:59:50.721: INFO: Waiting for pod pod-secrets-34bcef35-7be0-48d5-8e14-d61c6256e59d to disappear
May 11 08:59:50.761: INFO: Pod pod-secrets-34bcef35-7be0-48d5-8e14-d61c6256e59d no longer exists
[AfterEach] [sig-storage] Secrets
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 08:59:50.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9290" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":335,"completed":78,"skipped":1645,"failed":0}
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Subpath
... skipping 7 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod pod-subpath-test-downwardapi-gcj9
STEP: Creating a pod to test atomic-volume-subpath
May 11 08:59:51.273: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-gcj9" in namespace "subpath-3690" to be "Succeeded or Failed"
May 11 08:59:51.337: INFO: Pod "pod-subpath-test-downwardapi-gcj9": Phase="Pending", Reason="", readiness=false. Elapsed: 64.710883ms
May 11 08:59:53.375: INFO: Pod "pod-subpath-test-downwardapi-gcj9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102296517s
May 11 08:59:55.413: INFO: Pod "pod-subpath-test-downwardapi-gcj9": Phase="Running", Reason="", readiness=true. Elapsed: 4.140239288s
May 11 08:59:57.451: INFO: Pod "pod-subpath-test-downwardapi-gcj9": Phase="Running", Reason="", readiness=true. Elapsed: 6.178507073s
May 11 08:59:59.489: INFO: Pod "pod-subpath-test-downwardapi-gcj9": Phase="Running", Reason="", readiness=true. Elapsed: 8.216483433s
May 11 09:00:01.528: INFO: Pod "pod-subpath-test-downwardapi-gcj9": Phase="Running", Reason="", readiness=true. Elapsed: 10.255465216s
... skipping 2 lines ...
May 11 09:00:07.644: INFO: Pod "pod-subpath-test-downwardapi-gcj9": Phase="Running", Reason="", readiness=true. Elapsed: 16.371605116s
May 11 09:00:09.682: INFO: Pod "pod-subpath-test-downwardapi-gcj9": Phase="Running", Reason="", readiness=true. Elapsed: 18.409506806s
May 11 09:00:11.722: INFO: Pod "pod-subpath-test-downwardapi-gcj9": Phase="Running", Reason="", readiness=true. Elapsed: 20.449169051s
May 11 09:00:13.761: INFO: Pod "pod-subpath-test-downwardapi-gcj9": Phase="Running", Reason="", readiness=true. Elapsed: 22.48791109s
May 11 09:00:15.800: INFO: Pod "pod-subpath-test-downwardapi-gcj9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.527086828s
STEP: Saw pod success
May 11 09:00:15.800: INFO: Pod "pod-subpath-test-downwardapi-gcj9" satisfied condition "Succeeded or Failed"
May 11 09:00:15.837: INFO: Trying to get logs from node capz-ips2qf-md-0-4h2wc pod pod-subpath-test-downwardapi-gcj9 container test-container-subpath-downwardapi-gcj9: <nil>
STEP: delete the pod
May 11 09:00:15.938: INFO: Waiting for pod pod-subpath-test-downwardapi-gcj9 to disappear
May 11 09:00:15.974: INFO: Pod pod-subpath-test-downwardapi-gcj9 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-gcj9
May 11 09:00:15.974: INFO: Deleting pod "pod-subpath-test-downwardapi-gcj9" in namespace "subpath-3690"
[AfterEach] [sig-storage] Subpath
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 09:00:16.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3690" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","total":335,"completed":79,"skipped":1646,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-cli] Kubectl client
... skipping 29 lines ...
May 11 09:00:21.129: INFO: Selector matched 1 pods for map[app:agnhost]
May 11 09:00:21.129: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 09:00:21.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3383" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":335,"completed":80,"skipped":1651,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 11 lines ...
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 09:00:23.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9462" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":335,"completed":81,"skipped":1679,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 9 lines ...
May 11 09:00:24.364: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
May 11 09:00:29.473: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 09:00:48.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3693" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":335,"completed":82,"skipped":1692,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 11 lines ...
May 11 09:01:07.678: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
May 11 09:01:12.164: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 09:01:32.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2216" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":335,"completed":83,"skipped":1735,"failed":0}

------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-network] DNS
... skipping 21 lines ...
May 11 09:01:34.942: INFO: ExecWithOptions: execute(POST https://capz-ips2qf-6d06e991.eastus2.cloudapp.azure.com:6443/api/v1/namespaces/dns-6206/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-server-list&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING))
May 11 09:01:35.235: INFO: Deleting pod test-dns-nameservers...
[AfterEach] [sig-network] DNS
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 09:01:35.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6206" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":335,"completed":84,"skipped":1735,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Downward API volume
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
May 11 09:01:35.641: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc337a4f-938d-4e7f-8893-9febd8605771" in namespace "downward-api-771" to be "Succeeded or Failed"
May 11 09:01:35.677: INFO: Pod "downwardapi-volume-dc337a4f-938d-4e7f-8893-9febd8605771": Phase="Pending", Reason="", readiness=false. Elapsed: 35.670941ms
May 11 09:01:37.713: INFO: Pod "downwardapi-volume-dc337a4f-938d-4e7f-8893-9febd8605771": Phase="Running", Reason="", readiness=true. Elapsed: 2.071636492s
May 11 09:01:39.748: INFO: Pod "downwardapi-volume-dc337a4f-938d-4e7f-8893-9febd8605771": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.107010618s
STEP: Saw pod success
May 11 09:01:39.748: INFO: Pod "downwardapi-volume-dc337a4f-938d-4e7f-8893-9febd8605771" satisfied condition "Succeeded or Failed"
May 11 09:01:39.786: INFO: Trying to get logs from node capz-ips2qf-md-0-4h2wc pod downwardapi-volume-dc337a4f-938d-4e7f-8893-9febd8605771 container client-container: <nil>
STEP: delete the pod
May 11 09:01:39.886: INFO: Waiting for pod downwardapi-volume-dc337a4f-938d-4e7f-8893-9febd8605771 to disappear
May 11 09:01:39.919: INFO: Pod downwardapi-volume-dc337a4f-938d-4e7f-8893-9febd8605771 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 09:01:39.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-771" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":335,"completed":85,"skipped":1739,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] InitContainer [NodeConformance]
... skipping 10 lines ...
STEP: creating the pod
May 11 09:01:40.250: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 09:01:43.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4216" for this suite.
•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":335,"completed":86,"skipped":1852,"failed":0}

------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] ReplicationController
... skipping 14 lines ...
May 11 09:01:45.443: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 09:01:45.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8807" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":335,"completed":87,"skipped":1852,"failed":0}
S
------------------------------
[sig-auth] ServiceAccounts 
  should guarantee kube-root-ca.crt exist in any namespace [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 13 lines ...
STEP: waiting for the root ca configmap reconciled
May 11 09:01:46.988: INFO: Reconciled root ca configmap in namespace "svcaccounts-592"
[AfterEach] [sig-auth] ServiceAccounts
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 09:01:46.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-592" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":335,"completed":88,"skipped":1853,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  should validate Deployment Status endpoints [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] Deployment
... skipping 63 lines ...
May 11 09:01:49.884: INFO: Pod "test-deployment-qsr2m-764bc7c4b7-gv7pd" is available:
&Pod{ObjectMeta:{test-deployment-qsr2m-764bc7c4b7-gv7pd test-deployment-qsr2m-764bc7c4b7- deployment-6811  ffef9fe8-1a45-43f8-afed-07fd3e5bed5a 11626 0 2022-05-11 09:01:47 +0000 UTC <nil> <nil> map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[cni.projectcalico.org/containerID:1080b5e92b662097f1e4acc12c10da66165966a8bf2c54fb056365f48eeda524 cni.projectcalico.org/podIP:192.168.226.240/32 cni.projectcalico.org/podIPs:192.168.226.240/32] [{apps/v1 ReplicaSet test-deployment-qsr2m-764bc7c4b7 46ac2545-88ee-41e1-8350-225ca76c21d0 0xc004910f40 0xc004910f41}] []  [{calico Update v1 2022-05-11 09:01:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2022-05-11 09:01:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"46ac2545-88ee-41e1-8350-225ca76c21d0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 2022-05-11 09:01:48 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.226.240\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fvcfk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:Resourc
eList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fvcfk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-ips2qf-md-0-4h2wc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS
:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-11 09:01:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-11 09:01:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-11 09:01:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-11 09:01:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:192.168.226.240,StartTime:2022-05-11 09:01:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-11 09:01:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://14c832e4c9e018bf11b75e231607b2247b674dfa95d73384e1131614cebb91cd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.226.240,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 09:01:49.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6811" for this suite.
•{"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":335,"completed":89,"skipped":1867,"failed":0}
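The spec above creates a Deployment and then reads and patches its `/status` subresource. A sketch of a Deployment like the one in the run (image and labels taken from the Pod dump above; other field values illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment        # the run's generated name was test-deployment-qsr2m
  labels:
    e2e: testing
spec:
  replicas: 1
  selector:
    matchLabels:
      name: httpd
  template:
    metadata:
      labels:
        name: httpd
        e2e: testing
    spec:
      containers:
      - name: httpd
        image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2
        imagePullPolicy: IfNotPresent
```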
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-storage] Projected configMap
... skipping 15 lines ...
STEP: Creating configMap with name cm-test-opt-create-e43054c4-bf22-45ef-ae2e-108ec16b4db8
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 09:01:54.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-917" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":335,"completed":90,"skipped":1948,"failed":0}
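The spec above relies on marking a projected ConfigMap source `optional: true`: the pod can start before the ConfigMap exists, and its later creation (the `cm-test-opt-create-…` STEP above) is reflected in the volume. A hedged sketch of that setup — the pod name and mount path are hypothetical; the ConfigMap name is the one from this run:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps   # hypothetical
spec:
  containers:
  - name: main
    image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: cm-test-opt-create-e43054c4-bf22-45ef-ae2e-108ec16b4db8
          optional: true   # pod starts even if this ConfigMap is absent
```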
SSSSSSSSSS
------------------------------
[sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces 
  should list and delete a collection of PodDisruptionBudgets [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] DisruptionController
... skipping 26 lines ...
May 11 09:01:55.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-2-9343" for this suite.
[AfterEach] [sig-apps] DisruptionController
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 09:01:55.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-2402" for this suite.
•{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":335,"completed":91,"skipped":1958,"failed":0}
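The spec above creates PodDisruptionBudgets in two namespaces (`disruption-2402` and `disruption-2-9343`), lists them across all namespaces, and deletes them as a collection. A minimal PDB sketch (name, selector, and threshold hypothetical):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: pdb-1            # hypothetical
spec:
  minAvailable: 1        # voluntary disruptions must leave at least one pod
  selector:
    matchLabels:
      app: sample
```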
SSSS
------------------------------
[sig-node] ConfigMap 
  should run through a ConfigMap lifecycle [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] ConfigMap
... skipping 12 lines ...
STEP: deleting the ConfigMap by collection with a label selector
STEP: listing all ConfigMaps in test namespace
[AfterEach] [sig-node] ConfigMap
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 09:01:56.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1680" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":335,"completed":92,"skipped":1962,"failed":0}
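The lifecycle spec above creates a labeled ConfigMap, patches it, and then deletes it "by collection with a label selector" — deleting every ConfigMap matching the label in one call rather than by name. A sketch of such a labeled ConfigMap (name, label, and data hypothetical):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-configmap           # hypothetical
  labels:
    test-configmap-static: "true"   # selector target for delete-by-collection
data:
  valueName: value
```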

------------------------------
[sig-node] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-node] Pods
... skipping 6 lines ...
[BeforeEach] [sig-node] Pods
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
May 11 09:01:56.841: INFO: The status of Pod server-envvars-4df4c15e-7a96-404f-9314-6e1115ef3fee is Pending, waiting for it to be Running (with Ready = true)
May 11 09:01:58.878: INFO: The status of Pod server-envvars-4df4c15e-7a96-404f-9314-6e1115ef3fee is Running (Ready = true)
May 11 09:01:59.003: INFO: Waiting up to 5m0s for pod "client-envvars-72404bef-93a6-4959-854d-fea8db36f580" in namespace "pods-6865" to be "Succeeded or Failed"
May 11 09:01:59.047: INFO: Pod "client-envvars-72404bef-93a6-4959-854d-fea8db36f580": Phase="Pending", Reason="", readiness=false. Elapsed: 43.04953ms
May 11 09:02:01.097: INFO: Pod "client-envvars-72404bef-93a6-4959-854d-fea8db36f580": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.093482912s
STEP: Saw pod success
May 11 09:02:01.097: INFO: Pod "client-envvars-72404bef-93a6-4959-854d-fea8db36f580" satisfied condition "Succeeded or Failed"
May 11 09:02:01.131: INFO: Trying to get logs from node capz-ips2qf-md-0-4h2wc pod client-envvars-72404bef-93a6-4959-854d-fea8db36f580 container env3cont: <nil>
STEP: delete the pod
May 11 09:02:01.232: INFO: Waiting for pod client-envvars-72404bef-93a6-4959-854d-fea8db36f580 to disappear
May 11 09:02:01.267: INFO: Pod client-envvars-72404bef-93a6-4959-854d-fea8db36f580 no longer exists
[AfterEach] [sig-node] Pods
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 11 09:02:01.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6865" for this suite.
•{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":335,"completed":93,"skipped":1962,"failed":0}
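The spec above starts a server pod, fronts it with a Service, and then checks that a client pod created afterwards sees `<NAME>_SERVICE_HOST` / `<NAME>_SERVICE_PORT` environment variables injected by the kubelet (the Pod dump earlier shows `enableServiceLinks: *true`, which controls this). A hedged Service sketch — name and ports are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: fooservice   # yields FOOSERVICE_SERVICE_HOST / FOOSERVICE_SERVICE_PORT in later pods
spec:
  selector:
    name: server-envvars   # matches the server pod created by the test
  ports:
  - port: 8765
    targetPort: 8080
```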
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[BeforeEach] [sig-apps] StatefulSet
... skipping 26 lines ...
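Canary and phased rolling updates on a StatefulSet, as the spec above exercises, are driven by the `rollingUpdate.partition` field: only pods with an ordinal greater than or equal to the partition receive the new template, so lowering the partition step by step phases the rollout. A hedged sketch (names and counts hypothetical):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2              # hypothetical
spec:
  serviceName: test
  replicas: 3
  selector:
    matchLabels:
      app: ss2
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2       # only ordinals >= 2 get the new template; lower to phase the rollout
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: httpd
        image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2
```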