PR lzhecheng: [WIP][NOT FOR REVIEW] Improve test stability
Result: ABORTED
Tests: 0 failed / 0 succeeded
Started: 2022-07-07 07:32
Elapsed: 44m37s
Revision: 01c388a25f9bb92a134d7322c75bc76fc724875e
Refs: 968

No Test Failures!


Error lines from build-log.txt

... skipping 134 lines ...
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
/home/prow/go/src/sigs.k8s.io/cloud-provider-azure /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure
Image Tag is 1b1ed26
Error response from daemon: manifest for capzci.azurecr.io/azure-cloud-controller-manager:1b1ed26 not found: manifest unknown: manifest tagged by "1b1ed26" is not found
Build Linux Azure amd64 cloud controller manager
make: Entering directory '/home/prow/go/src/sigs.k8s.io/cloud-provider-azure'
make ARCH=amd64 build-ccm-image
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cloud-provider-azure'
docker buildx inspect img-builder > /dev/null 2>&1 || docker buildx create --name img-builder --use
img-builder
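The inspect-or-create line above is the standard reuse-or-create idiom for a buildx builder: "docker buildx inspect" exits non-zero when the named builder does not exist, so the "||" branch creates it and selects it as the active builder. A minimal standalone sketch, with the builder name taken from the log:

# Reuse the "img-builder" builder if it already exists; otherwise create it
# and make it the active builder for subsequent buildx builds.
docker buildx inspect img-builder > /dev/null 2>&1 \
  || docker buildx create --name img-builder --use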
... skipping 1110 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
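The NotFound error above is expected: the sequence suggests hack/create-identity-secret.sh deletes any stale secret before recreating and labeling it, and deleting a missing secret prints that error. A hedged sketch of the delete/create/label pattern; the secret name is from the log, while the key, value, and label names are illustrative:

# Remove any stale copy first; a NotFound error here is harmless.
kubectl delete secret cluster-identity-secret || true
# Recreate it from the service principal secret (variable name illustrative).
kubectl create secret generic cluster-identity-secret \
  --from-literal=clientSecret="${AZURE_CLIENT_SECRET}"
# Label it so cluster tooling can select it (label illustrative).
kubectl label secret cluster-identity-secret purpose=cluster-identity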
# Deploy CAPI
curl --retry 3 -sSL https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.1.4/cluster-api-components.yaml | /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/envsubst-v2.0.0-20210730161058-179042472c46 | kubectl apply -f -
namespace/capi-system created
customresourcedefinition.apiextensions.k8s.io/clusterclasses.cluster.x-k8s.io created
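The deploy step above is a single fetch, substitute, apply pipeline: download the manifest pinned to a release tag, expand its ${...} placeholders, and stream the result into kubectl. A minimal sketch of the same pattern, with a stock envsubst standing in for the vendored hack/tools binary:

CAPI_VERSION=v1.1.4   # release tag pinned by the job
curl --retry 3 -sSL \
  "https://github.com/kubernetes-sigs/cluster-api/releases/download/${CAPI_VERSION}/cluster-api-components.yaml" \
  | envsubst \
  | kubectl apply -f -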
... skipping 196 lines ...
make[1]: Entering directory '/home/prow/go/src/k8s.io/kubernetes'
+++ [0707 07:58:46] Building go targets for linux/amd64
    github.com/onsi/ginkgo/ginkgo (non-static)
make[1]: Leaving directory '/home/prow/go/src/k8s.io/kubernetes'
Conformance test: not doing test setup.
I0707 07:58:50.314156   93551 e2e.go:129] Starting e2e run "4ca30b08-fd20-411b-b516-e97bb3095a24" on Ginkgo node 1
{"msg":"Test Suite starting","total":349,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1657180730 - Will randomize all specs
Will run 349 of 7047 specs
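The harness built above selects 349 of the suite's 7047 specs with a Ginkgo focus expression; the exact expression is not shown in the log (both [Conformance] and [NodeConformance] specs appear below). A hedged sketch of an equivalent invocation, where the binary path and focus expression are assumptions and the kubeconfig path is taken from the log:

# Run only the focused specs, mirroring the 349-of-7047 selection above.
./_output/bin/e2e.test \
  --ginkgo.focus='\[Conformance\]|\[NodeConformance\]' \
  --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig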

Jul  7 07:58:52.150: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
... skipping 26 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/storage/downwardapi_volume.go:43
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jul  7 07:58:54.517: INFO: Waiting up to 5m0s for pod "downwardapi-volume-170ede7b-236f-4062-90f5-29b24fd9f907" in namespace "downward-api-3392" to be "Succeeded or Failed"
Jul  7 07:58:54.626: INFO: Pod "downwardapi-volume-170ede7b-236f-4062-90f5-29b24fd9f907": Phase="Pending", Reason="", readiness=false. Elapsed: 108.22865ms
Jul  7 07:58:56.740: INFO: Pod "downwardapi-volume-170ede7b-236f-4062-90f5-29b24fd9f907": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222521291s
Jul  7 07:58:58.740: INFO: Pod "downwardapi-volume-170ede7b-236f-4062-90f5-29b24fd9f907": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222784409s
Jul  7 07:59:00.740: INFO: Pod "downwardapi-volume-170ede7b-236f-4062-90f5-29b24fd9f907": Phase="Pending", Reason="", readiness=false. Elapsed: 6.222527838s
Jul  7 07:59:02.739: INFO: Pod "downwardapi-volume-170ede7b-236f-4062-90f5-29b24fd9f907": Phase="Running", Reason="", readiness=false. Elapsed: 8.221657184s
Jul  7 07:59:04.740: INFO: Pod "downwardapi-volume-170ede7b-236f-4062-90f5-29b24fd9f907": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.222760871s
STEP: Saw pod success
Jul  7 07:59:04.741: INFO: Pod "downwardapi-volume-170ede7b-236f-4062-90f5-29b24fd9f907" satisfied condition "Succeeded or Failed"
Jul  7 07:59:04.855: INFO: Trying to get logs from node capz-0jrudd-md-0-98q89 pod downwardapi-volume-170ede7b-236f-4062-90f5-29b24fd9f907 container client-container: <nil>
STEP: delete the pod
Jul  7 07:59:05.097: INFO: Waiting for pod downwardapi-volume-170ede7b-236f-4062-90f5-29b24fd9f907 to disappear
Jul  7 07:59:05.206: INFO: Pod downwardapi-volume-170ede7b-236f-4062-90f5-29b24fd9f907 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:187
Jul  7 07:59:05.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3392" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":349,"completed":1,"skipped":9,"failed":0}

------------------------------
[sig-node] Container Runtime blackbox test when running a container with a new image 
  should not be able to pull image from invalid registry [NodeConformance]
  test/e2e/common/node/runtime.go:370
[BeforeEach] [sig-node] Container Runtime
... skipping 9 lines ...
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  test/e2e/framework/framework.go:187
Jul  7 07:59:08.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5759" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":349,"completed":2,"skipped":9,"failed":0}
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 37 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:187
Jul  7 07:59:21.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4387" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":349,"completed":3,"skipped":11,"failed":0}

------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
  test/e2e/framework/framework.go:187
Jul  7 07:59:33.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7101" for this suite.
STEP: Destroying namespace "webhook-7101-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:104
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":349,"completed":4,"skipped":11,"failed":0}
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected configMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap with name projected-configmap-test-volume-69ced32f-54c8-4d9f-abea-a28c219928bb
STEP: Creating a pod to test consume configMaps
Jul  7 07:59:35.475: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f6fe5ced-6aff-4234-862e-f6663ede755e" in namespace "projected-1049" to be "Succeeded or Failed"
Jul  7 07:59:35.584: INFO: Pod "pod-projected-configmaps-f6fe5ced-6aff-4234-862e-f6663ede755e": Phase="Pending", Reason="", readiness=false. Elapsed: 108.28208ms
Jul  7 07:59:37.695: INFO: Pod "pod-projected-configmaps-f6fe5ced-6aff-4234-862e-f6663ede755e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219607533s
Jul  7 07:59:39.695: INFO: Pod "pod-projected-configmaps-f6fe5ced-6aff-4234-862e-f6663ede755e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.219708611s
STEP: Saw pod success
Jul  7 07:59:39.695: INFO: Pod "pod-projected-configmaps-f6fe5ced-6aff-4234-862e-f6663ede755e" satisfied condition "Succeeded or Failed"
Jul  7 07:59:39.805: INFO: Trying to get logs from node capz-0jrudd-md-0-98q89 pod pod-projected-configmaps-f6fe5ced-6aff-4234-862e-f6663ede755e container agnhost-container: <nil>
STEP: delete the pod
Jul  7 07:59:40.031: INFO: Waiting for pod pod-projected-configmaps-f6fe5ced-6aff-4234-862e-f6663ede755e to disappear
Jul  7 07:59:40.140: INFO: Pod pod-projected-configmaps-f6fe5ced-6aff-4234-862e-f6663ede755e no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:187
Jul  7 07:59:40.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1049" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":349,"completed":5,"skipped":14,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
  test/e2e/framework/framework.go:187
Jul  7 07:59:47.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5560" for this suite.
STEP: Destroying namespace "webhook-5560-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:104
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":349,"completed":6,"skipped":47,"failed":0}

------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Downward API
... skipping 3 lines ...
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward api env vars
Jul  7 07:59:49.232: INFO: Waiting up to 5m0s for pod "downward-api-8afb963f-ccd0-4e5e-915a-89d18aa4aefa" in namespace "downward-api-432" to be "Succeeded or Failed"
Jul  7 07:59:49.341: INFO: Pod "downward-api-8afb963f-ccd0-4e5e-915a-89d18aa4aefa": Phase="Pending", Reason="", readiness=false. Elapsed: 108.785505ms
Jul  7 07:59:51.452: INFO: Pod "downward-api-8afb963f-ccd0-4e5e-915a-89d18aa4aefa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220151116s
Jul  7 07:59:53.452: INFO: Pod "downward-api-8afb963f-ccd0-4e5e-915a-89d18aa4aefa": Phase="Running", Reason="", readiness=false. Elapsed: 4.219568397s
Jul  7 07:59:55.453: INFO: Pod "downward-api-8afb963f-ccd0-4e5e-915a-89d18aa4aefa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.220826953s
STEP: Saw pod success
Jul  7 07:59:55.453: INFO: Pod "downward-api-8afb963f-ccd0-4e5e-915a-89d18aa4aefa" satisfied condition "Succeeded or Failed"
Jul  7 07:59:55.564: INFO: Trying to get logs from node capz-0jrudd-md-0-98q89 pod downward-api-8afb963f-ccd0-4e5e-915a-89d18aa4aefa container dapi-container: <nil>
STEP: delete the pod
Jul  7 07:59:55.793: INFO: Waiting for pod downward-api-8afb963f-ccd0-4e5e-915a-89d18aa4aefa to disappear
Jul  7 07:59:55.902: INFO: Pod downward-api-8afb963f-ccd0-4e5e-915a-89d18aa4aefa no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:187
Jul  7 07:59:55.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-432" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":349,"completed":7,"skipped":47,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] Watchers
... skipping 14 lines ...
Jul  7 07:59:57.681: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-5706  5545abc0-5722-4bed-a3e9-6ede84dbcf78 2258 0 2022-07-07 07:59:56 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-07-07 07:59:57 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jul  7 07:59:57.682: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-5706  5545abc0-5722-4bed-a3e9-6ede84dbcf78 2259 0 2022-07-07 07:59:56 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-07-07 07:59:57 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:187
Jul  7 07:59:57.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5706" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":349,"completed":8,"skipped":55,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jul  7 07:59:58.793: INFO: Waiting up to 5m0s for pod "downwardapi-volume-72f1c793-a894-415d-a6fb-dea1b3c54ba5" in namespace "projected-125" to be "Succeeded or Failed"
Jul  7 07:59:58.903: INFO: Pod "downwardapi-volume-72f1c793-a894-415d-a6fb-dea1b3c54ba5": Phase="Pending", Reason="", readiness=false. Elapsed: 109.269031ms
Jul  7 08:00:01.014: INFO: Pod "downwardapi-volume-72f1c793-a894-415d-a6fb-dea1b3c54ba5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220066888s
Jul  7 08:00:03.031: INFO: Pod "downwardapi-volume-72f1c793-a894-415d-a6fb-dea1b3c54ba5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.237809782s
STEP: Saw pod success
Jul  7 08:00:03.031: INFO: Pod "downwardapi-volume-72f1c793-a894-415d-a6fb-dea1b3c54ba5" satisfied condition "Succeeded or Failed"
Jul  7 08:00:03.154: INFO: Trying to get logs from node capz-0jrudd-md-0-ms8xh pod downwardapi-volume-72f1c793-a894-415d-a6fb-dea1b3c54ba5 container client-container: <nil>
STEP: delete the pod
Jul  7 08:00:03.410: INFO: Waiting for pod downwardapi-volume-72f1c793-a894-415d-a6fb-dea1b3c54ba5 to disappear
Jul  7 08:00:03.519: INFO: Pod downwardapi-volume-72f1c793-a894-415d-a6fb-dea1b3c54ba5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:187
Jul  7 08:00:03.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-125" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":349,"completed":9,"skipped":108,"failed":0}
SSSSS
------------------------------
[sig-node] Sysctls [LinuxOnly] [NodeConformance] 
  should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
  test/e2e/common/node/sysctl.go:159
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
... skipping 7 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  test/e2e/common/node/sysctl.go:67
[It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
  test/e2e/common/node/sysctl.go:159
STEP: Creating a pod with an ignorelisted, but not allowlisted sysctl on the node
STEP: Wait for pod failed reason
Jul  7 08:00:04.619: INFO: Waiting up to 5m0s for pod "sysctl-885565fd-be79-42f9-9ec1-e3ab3400fe74" in namespace "sysctl-2879" to be "failed with reason SysctlForbidden"
Jul  7 08:00:04.728: INFO: Pod "sysctl-885565fd-be79-42f9-9ec1-e3ab3400fe74": Phase="Failed", Reason="SysctlForbidden", readiness=false. Elapsed: 108.602285ms
Jul  7 08:00:04.728: INFO: Pod "sysctl-885565fd-be79-42f9-9ec1-e3ab3400fe74" satisfied condition "failed with reason SysctlForbidden"
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  test/e2e/framework/framework.go:187
Jul  7 08:00:04.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-2879" for this suite.
•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":349,"completed":10,"skipped":113,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] Networking
... skipping 59 lines ...
Jul  7 08:00:30.649: INFO: ExecWithOptions: execute(POST https://capz-0jrudd-873c3541.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/pod-network-test-7815/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.154.196%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Jul  7 08:00:31.431: INFO: Found all 1 expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:187
Jul  7 08:00:31.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7815" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":349,"completed":11,"skipped":136,"failed":0}
S
------------------------------
[sig-node] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jul  7 08:00:31.667: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  test/e2e/common/node/init_container.go:164
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:647
STEP: creating the pod
Jul  7 08:00:32.434: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:187
Jul  7 08:00:37.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8683" for this suite.
•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":349,"completed":12,"skipped":137,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] ReplicationController
... skipping 18 lines ...
Jul  7 08:00:40.565: INFO: Trying to dial the pod
Jul  7 08:00:45.896: INFO: Controller my-hostname-basic-68039b78-ff9b-4487-b73c-c7dcf00dfe5e: Got expected result from replica 1 [my-hostname-basic-68039b78-ff9b-4487-b73c-c7dcf00dfe5e-w7nmd]: "my-hostname-basic-68039b78-ff9b-4487-b73c-c7dcf00dfe5e-w7nmd", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:187
Jul  7 08:00:45.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3985" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":349,"completed":13,"skipped":164,"failed":0}
SSSSSSSSSS
------------------------------
[sig-node] Container Runtime blackbox test when running a container with a new image 
  should not be able to pull from private registry without secret [NodeConformance]
  test/e2e/common/node/runtime.go:381
[BeforeEach] [sig-node] Container Runtime
... skipping 9 lines ...
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  test/e2e/framework/framework.go:187
Jul  7 08:00:50.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1803" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":349,"completed":14,"skipped":174,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 15 lines ...
Jul  7 08:00:56.018: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  test/e2e/framework/framework.go:647
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
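The steps above register the same slow (5s) webhook with different timeoutSeconds values to check timeout and failurePolicy semantics. The field being exercised lives on each entry of a ValidatingWebhookConfiguration; a hedged patch sketch with a hypothetical configuration name:

# Tighten the first webhook's timeout to 1s, shorter than the 5s the test
# webhook sleeps, so requests fail unless failurePolicy is Ignore.
kubectl patch validatingwebhookconfiguration e2e-test-slow-webhook \
  --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/timeoutSeconds","value":1}]'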
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:187
Jul  7 08:01:10.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4134" for this suite.
STEP: Destroying namespace "webhook-4134-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:104
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":349,"completed":15,"skipped":188,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] Networking
... skipping 61 lines ...
Jul  7 08:01:39.174: INFO: reached 192.168.154.199 after 0/1 tries
Jul  7 08:01:39.174: INFO: Going to retry 0 out of 2 pods....
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:187
Jul  7 08:01:39.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7556" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":349,"completed":16,"skipped":199,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Secrets 
  should patch a secret [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Secrets
... skipping 11 lines ...
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-node] Secrets
  test/e2e/framework/framework.go:187
Jul  7 08:01:40.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-762" for this suite.
•{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":349,"completed":17,"skipped":215,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Lease 
  lease API should be available [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Lease
... skipping 6 lines ...
[It] lease API should be available [Conformance]
  test/e2e/framework/framework.go:647
[AfterEach] [sig-node] Lease
  test/e2e/framework/framework.go:187
Jul  7 08:01:43.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-9714" for this suite.
•{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":349,"completed":18,"skipped":237,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul  7 08:01:44.347: INFO: Waiting up to 5m0s for pod "pod-3c9fa60f-486f-4652-b83a-d7cebfd7adda" in namespace "emptydir-7807" to be "Succeeded or Failed"
Jul  7 08:01:44.456: INFO: Pod "pod-3c9fa60f-486f-4652-b83a-d7cebfd7adda": Phase="Pending", Reason="", readiness=false. Elapsed: 109.206631ms
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:169","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2022-07-07T08:01:45Z"}
++ early_exit_handler
++ '[' -n 162 ']'
++ kill -TERM 162
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 4 lines ...