PR adrianreber: Minimal checkpointing support
Result ABORTED
Tests 0 failed / 0 succeeded
Started 2022-07-08 07:01
Elapsed 47m45s
Revision 9671d7dd3d4a83444b68e0f41097401a861e0f21
Refs 104907

No Test Failures!


Error lines from build-log.txt

... skipping 444 lines ...
I0708 07:17:40.685518     247 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0708 07:17:41.185818     247 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0708 07:17:41.685630     247 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0708 07:17:42.185894     247 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0708 07:17:42.686330     247 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0708 07:17:43.185507     247 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0708 07:17:45.755315     247 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 2070 milliseconds
I0708 07:17:46.186833     247 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 1 milliseconds
I0708 07:17:46.687815     247 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 2 milliseconds
I0708 07:17:47.190035     247 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 4 milliseconds
I0708 07:17:47.691398     247 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 6 milliseconds
I0708 07:17:48.187373     247 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 1 milliseconds
I0708 07:17:48.687394     247 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s 200 OK in 2 milliseconds
I0708 07:17:48.687519     247 uploadconfig.go:110] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[apiclient] All control plane components are healthy after 8.503849 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0708 07:17:48.691509     247 round_trippers.go:553] POST https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 3 milliseconds
I0708 07:17:48.701362     247 round_trippers.go:553] POST https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 9 milliseconds
... skipping 252 lines ...
+ export KUBE_CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
+ export KUBE_CONTAINER_RUNTIME_NAME=containerd
+ GINKGO_PID=81989
+ wait 81989
+ ./hack/ginkgo-e2e.sh --provider=skeleton --num-nodes=2 --ginkgo.focus=. --ginkgo.skip=\[Serial\]|\[Slow\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|PodSecurityPolicy|LoadBalancer|load.balancer|Simple.pod.should.support.exec.through.an.HTTP.proxy|subPath.should.support.existing|NFS|nfs|inline.execution.and.attach|should.be.rejected.when.no.endpoints.exist --report-dir=/logs/artifacts --disable-log-dump=true
Conformance test: not doing test setup.
{"msg":"Test Suite starting","completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite - /home/prow/go/src/k8s.io/kubernetes/_output/local/bin/linux/amd64
=======================================================================================================
Random Seed: 1657264701 - will randomize all specs

Will run 2006 of 7047 specs
Running in parallel across 25 processes
------------------------------
[SynchronizedBeforeSuite] PASSED [30.057 seconds]
[SynchronizedBeforeSuite] 
test/e2e/e2e.go:76

  Begin Captured StdOut/StdErr Output >>
    {"msg":"Test Suite starting","completed":0,"skipped":0,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
... skipping 318 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (delayed binding)] topology [BeforeEach]
    test/e2e/storage/framework/testsuite.go:51
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies
      test/e2e/storage/testsuites/topology.go:194

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
      test/e2e/storage/framework/testsuite.go:51
    Jul  8 07:18:54.983: INFO: Only supported for providers [vsphere] (not skeleton)
... skipping 937 lines ...
------------------------------
• [0.463 seconds]
[sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL
test/e2e/apimachinery/request_timeout.go:38

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL","completed":1,"skipped":2,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 143 lines ...
------------------------------
• [0.617 seconds]
[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]
test/e2e/common/storage/configmap_volume.go:503

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","completed":1,"skipped":4,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSSSSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [6.496 seconds]
[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]
test/e2e/network/endpointslicemirroring.go:53

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","completed":1,"skipped":2,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [0.123 seconds]
[sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]
test/e2e/kubectl/kubectl.go:1776

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","completed":2,"skipped":3,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [14.968 seconds]
[sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/configmap_volume.go:61

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","completed":1,"skipped":3,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
• [SLOW TEST] [20.634 seconds]
[sig-apps] Deployment deployment reaping should cascade to its replica sets and pods
test/e2e/apps/deployment.go:97

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods","completed":1,"skipped":19,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [20.633 seconds]
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/common/storage/secrets_volume.go:56

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","completed":1,"skipped":13,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.211 seconds]
[sig-storage] In-tree Volumes
... skipping 33 lines ...
------------------------------
• [SLOW TEST] [21.022 seconds]
[sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:382

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","completed":1,"skipped":6,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [12.107 seconds]
[sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]
test/e2e/common/node/security_context.go:141

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","completed":2,"skipped":6,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 45 lines ...
------------------------------
• [SLOW TEST] [30.669 seconds]
[sig-network] DNS should support configurable pod resolv.conf
test/e2e/network/dns.go:460

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","completed":1,"skipped":8,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [30.371 seconds]
[sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects
test/e2e/kubectl/portforward.go:478

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects","completed":2,"skipped":21,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [24.639 seconds]
[sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]
test/e2e/apps/disruption.go:346

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","completed":3,"skipped":14,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
... skipping 18 lines ...
------------------------------
• [SLOW TEST] [34.853 seconds]
[sig-storage] PVC Protection Verify "immediate" deletion of a PVC that is not in active use by a pod
test/e2e/storage/pvc_protection.go:116

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] PVC Protection Verify \"immediate\" deletion of a PVC that is not in active use by a pod","completed":1,"skipped":23,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
... skipping 68 lines ...
------------------------------
• [SLOW TEST] [16.499 seconds]
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
test/e2e/common/storage/secrets_volume.go:78

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","completed":2,"skipped":7,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 70 lines ...
------------------------------
• [SLOW TEST] [38.446 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
test/e2e/storage/testsuites/subpath.go:367

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","completed":1,"skipped":12,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 68 lines ...
------------------------------
• [0.044 seconds]
[sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes
test/e2e/apimachinery/table_conversion.go:129

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes","completed":2,"skipped":24,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 18 lines ...
  Only supported for providers [azure] (not skeleton)
  In [BeforeEach] at: test/e2e/storage/drivers/in_tree.go:2079
------------------------------
SSSSSS
------------------------------
• [SLOW TEST] [38.904 seconds]
[sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector
test/e2e/storage/persistent_volumes-local.go:381

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","completed":1,"skipped":18,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure-file]
... skipping 20 lines ...
------------------------------
• [2.215 seconds]
[sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler [NodeFeature:RuntimeHandler]
test/e2e/common/node/runtimeclass.go:61

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler [NodeFeature:RuntimeHandler]","completed":3,"skipped":20,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [0.239 seconds]
[sig-instrumentation] Events should manage the lifecycle of an event
test/e2e/instrumentation/core_events.go:135

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-instrumentation] Events should manage the lifecycle of an event","completed":4,"skipped":21,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: aws]
... skipping 20 lines ...
------------------------------
• [0.248 seconds]
[sig-cli] Kubectl client Kubectl create quota should create a quota without scopes
test/e2e/kubectl/kubectl.go:1934

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","completed":5,"skipped":25,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [1.238 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]
test/e2e/apimachinery/custom_resource_definition.go:58

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","completed":2,"skipped":20,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 72 lines ...
------------------------------
• [SLOW TEST] [17.491 seconds]
[sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]
test/e2e/kubectl/kubectl.go:1265

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","completed":3,"skipped":14,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 95 lines ...
------------------------------
• [SLOW TEST] [26.434 seconds]
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
test/e2e/apps/replica_set.go:131

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","completed":2,"skipped":22,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [49.379 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:447

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","completed":1,"skipped":5,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [50.939 seconds]
[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
test/e2e/network/endpointslice.go:204

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","completed":1,"skipped":18,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [12.314 seconds]
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/common/storage/empty_dir.go:176

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","completed":6,"skipped":35,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
... skipping 189 lines ...
------------------------------
• [SLOW TEST] [55.011 seconds]
[sig-network] Networking Granular Checks: Services should be able to handle large requests: http
test/e2e/network/networking.go:451

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Networking Granular Checks: Services should be able to handle large requests: http","completed":1,"skipped":18,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [12.323 seconds]
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
test/e2e/common/storage/projected_downwardapi.go:234

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","completed":4,"skipped":35,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 70 lines ...
------------------------------
• [SLOW TEST] [26.679 seconds]
[sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
test/e2e/common/node/runtime.go:247

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","completed":3,"skipped":24,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
• [SLOW TEST] [14.187 seconds]
[sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup
test/e2e/common/storage/empty_dir.go:71

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","completed":3,"skipped":22,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSS
------------------------------
• [0.220 seconds]
[sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]
test/e2e/common/storage/secrets_volume.go:385

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","completed":4,"skipped":27,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [24.325 seconds]
[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]
test/e2e/kubectl/kubectl.go:392

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","completed":3,"skipped":38,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 70 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (delayed binding)] topology [BeforeEach]
    test/e2e/storage/framework/testsuite.go:51
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies
      test/e2e/storage/testsuites/topology.go:194

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
      test/e2e/storage/framework/testsuite.go:51
    Jul  8 07:19:58.001: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 31 lines ...
------------------------------
• [SLOW TEST] [63.444 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
test/e2e/storage/testsuites/subpath.go:367

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","completed":1,"skipped":3,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [65.978 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
test/e2e/storage/testsuites/volumemode.go:354

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","completed":1,"skipped":27,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [67.396 seconds]
[sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:257

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","completed":1,"skipped":10,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: aws]
... skipping 43 lines ...
------------------------------
• [SLOW TEST] [36.491 seconds]
[sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:232

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","completed":4,"skipped":15,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
• [SLOW TEST] [68.850 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
test/e2e/storage/testsuites/volumemode.go:354

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","completed":1,"skipped":1,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
• [SLOW TEST] [12.429 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
test/e2e/apimachinery/webhook.go:238

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","completed":4,"skipped":27,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] CSI Volumes
... skipping 47 lines ...
------------------------------
• [SLOW TEST] [72.339 seconds]
[sig-network] Networking Granular Checks: Services should function for client IP based session affinity: http [LinuxOnly]
test/e2e/network/networking.go:416

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: http [LinuxOnly]","completed":1,"skipped":9,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
• [SLOW TEST] [18.308 seconds]
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
test/e2e/common/storage/projected_configmap.go:374

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","completed":2,"skipped":19,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
• [SLOW TEST] [16.303 seconds]
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
test/e2e/apimachinery/resource_quota.go:150

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","completed":5,"skipped":31,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.003 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure-file]
... skipping 120 lines ...
------------------------------
• [SLOW TEST] [10.312 seconds]
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
test/e2e/common/storage/downwardapi_volume.go:248

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","completed":5,"skipped":17,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [12.439 seconds]
[sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set [NodeConformance]
test/e2e/common/node/runtime.go:173

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set [NodeConformance]","completed":2,"skipped":27,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [SLOW TEST] [18.274 seconds]
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.
test/e2e/apimachinery/resource_quota.go:587

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.","completed":2,"skipped":4,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 97 lines ...
------------------------------
• [SLOW TEST] [48.011 seconds]
[sig-network] Services should serve a basic endpoint from pods  [Conformance]
test/e2e/network/service.go:791

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","completed":2,"skipped":29,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 45 lines ...
------------------------------
• [SLOW TEST] [14.281 seconds]
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/common/storage/empty_dir.go:96

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","completed":3,"skipped":22,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] CSI Volumes
... skipping 97 lines ...
------------------------------
• [SLOW TEST] [11.759 seconds]
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
test/e2e/apimachinery/resource_quota.go:90

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","completed":6,"skipped":51,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [17.065 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
test/e2e/apimachinery/crd_publish_openapi.go:391

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","completed":3,"skipped":39,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
... skipping 43 lines ...
------------------------------
• [SLOW TEST] [43.235 seconds]
[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
test/e2e/network/service.go:2070

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","completed":7,"skipped":57,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
• [SLOW TEST] [95.996 seconds]
[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
test/e2e/storage/testsuites/subpath.go:196

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","completed":1,"skipped":12,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [10.239 seconds]
[sig-node] Pods should patch a pod status
test/e2e/common/node/pods.go:1074

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Pods should patch a pod status","completed":4,"skipped":49,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [35.697 seconds]
[sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil
test/e2e/apimachinery/garbage_collector.go:439

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil","completed":4,"skipped":47,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (delayed binding)] topology [BeforeEach]
    test/e2e/storage/framework/testsuite.go:51
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies
      test/e2e/storage/testsuites/topology.go:194

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
      test/e2e/storage/framework/testsuite.go:51
    Jul  8 07:20:33.708: INFO: Only supported for providers [azure] (not skeleton)
... skipping 33 lines ...
------------------------------
• [SLOW TEST] [50.539 seconds]
[sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready
test/e2e/network/conntrack.go:295

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready","completed":2,"skipped":24,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [53.668 seconds]
[sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1
test/e2e/storage/persistent_volumes-local.go:240

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","completed":2,"skipped":18,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [0.174 seconds]
[sig-network] Services should test the lifecycle of an Endpoint [Conformance]
test/e2e/network/service.go:3144

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","completed":3,"skipped":19,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
• [SLOW TEST] [105.517 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
test/e2e/storage/testsuites/volumes.go:161

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","completed":1,"skipped":7,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [10.372 seconds]
[sig-apps] Deployment Deployment should have a working scale subresource [Conformance]
test/e2e/apps/deployment.go:150

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","completed":8,"skipped":61,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [106.632 seconds]
[sig-network] Conntrack should drop INVALID conntrack entries [Privileged]
test/e2e/network/conntrack.go:363

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Conntrack should drop INVALID conntrack entries [Privileged]","completed":2,"skipped":23,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 16 lines ...
  << End Captured GinkgoWriter Output

  Driver local doesn't support InlineVolume -- skipping
  In [BeforeEach] at: test/e2e/storage/framework/testsuite.go:116
------------------------------
• [2.099 seconds]
[sig-apps] Job should fail when exceeds active deadline
test/e2e/apps/job.go:293

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] Job should fail when exceeds active deadline","completed":2,"skipped":14,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [10.090 seconds]
[sig-auth] ServiceAccounts no secret-based service account token should be auto-generated
test/e2e/auth/service_accounts.go:56

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-auth] ServiceAccounts no secret-based service account token should be auto-generated","completed":5,"skipped":52,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
... skipping 79 lines ...
------------------------------
• [SLOW TEST] [43.701 seconds]
[sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:251

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","completed":2,"skipped":3,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 93 lines ...
------------------------------
• [SLOW TEST] [18.259 seconds]
[sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/projected_configmap.go:77

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","completed":2,"skipped":12,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (delayed binding)] topology [BeforeEach]
    test/e2e/storage/framework/testsuite.go:51
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies
      test/e2e/storage/testsuites/topology.go:194

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
      test/e2e/storage/framework/testsuite.go:51
    Jul  8 07:20:49.341: INFO: Only supported for providers [openstack] (not skeleton)
... skipping 10 lines ...
------------------------------
• [SLOW TEST] [73.815 seconds]
[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:382

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","completed":3,"skipped":29,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
... skipping 45 lines ...
------------------------------
• [SLOW TEST] [46.577 seconds]
[sig-network] Networking Granular Checks: Services should be able to handle large requests: udp
test/e2e/network/networking.go:461

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp","completed":2,"skipped":12,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSS
------------------------------
• [SLOW TEST] [36.245 seconds]
[sig-apps] Job should manage the lifecycle of a job [Conformance]
test/e2e/apps/job.go:531

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] Job should manage the lifecycle of a job [Conformance]","completed":3,"skipped":39,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
... skipping 18 lines ...
------------------------------
• [SLOW TEST] [98.596 seconds]
[sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false
test/e2e/storage/csi_mock_volume.go:517

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","completed":2,"skipped":19,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [0.259 seconds]
[sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field
test/e2e/apimachinery/apply.go:376

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field","completed":3,"skipped":27,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [0.919 seconds]
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
test/e2e/apimachinery/garbage_collector.go:491

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","completed":3,"skipped":17,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
• [SLOW TEST] [12.104 seconds]
[sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
test/e2e/common/node/kubelet.go:109

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","completed":3,"skipped":15,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
• [SLOW TEST] [55.026 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
test/e2e/storage/testsuites/subpath.go:196

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","completed":5,"skipped":37,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [31.124 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
test/e2e/apimachinery/crd_conversion_webhook.go:149

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","completed":4,"skipped":44,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure-file]
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [99.670 seconds]
[sig-storage] PersistentVolumes-expansion  loopback local block volume should support online expansion on node
test/e2e/storage/local_volume_resize.go:85

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] PersistentVolumes-expansion  loopback local block volume should support online expansion on node","completed":2,"skipped":8,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [0.774 seconds]
[sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]
test/e2e/apimachinery/discovery.go:122

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","completed":3,"skipped":30,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [SLOW TEST] [14.215 seconds]
[sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size
test/e2e/common/storage/empty_dir.go:298

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","completed":4,"skipped":40,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [33.086 seconds]
[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]
test/e2e/kubectl/kubectl.go:1581

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","completed":4,"skipped":21,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [136.413 seconds]
[sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap
test/e2e/storage/ephemeral_volume.go:57

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","completed":1,"skipped":1,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSSSSSS
------------------------------
• [1.261 seconds]
[sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC
test/e2e/kubectl/kubectl.go:845

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","completed":2,"skipped":12,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSSS
------------------------------
• [SLOW TEST] [18.448 seconds]
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
test/e2e/common/node/downwardapi.go:266

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","completed":4,"skipped":27,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [59.848 seconds]
[sig-network] SCTP [LinuxOnly] should create a ClusterIP Service with SCTP ports
test/e2e/network/service.go:4178

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] SCTP [LinuxOnly] should create a ClusterIP Service with SCTP ports","completed":6,"skipped":26,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology [BeforeEach]
    test/e2e/storage/framework/testsuite.go:51
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies
      test/e2e/storage/testsuites/topology.go:194

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
      test/e2e/storage/framework/testsuite.go:51
    Jul  8 07:21:13.079: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 106 lines ...
------------------------------
• [SLOW TEST] [36.537 seconds]
[sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
test/e2e/apps/disruption.go:289

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage =\u003e should allow an eviction","completed":3,"skipped":24,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
• [SLOW TEST] [138.399 seconds]
[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
test/e2e/storage/testsuites/volumemode.go:354

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","completed":1,"skipped":8,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSS
------------------------------
• [SLOW TEST] [24.293 seconds]
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
test/e2e/common/storage/secrets_volume.go:46

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","completed":4,"skipped":42,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [26.177 seconds]
[sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
test/e2e/common/node/security_context.go:284

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","completed":3,"skipped":19,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 45 lines ...
------------------------------
• [SLOW TEST] [5.325 seconds]
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
test/e2e/apps/rc.go:100

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","completed":3,"skipped":20,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [28.541 seconds]
[sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects
test/e2e/kubectl/portforward.go:481

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects","completed":4,"skipped":17,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [11.497 seconds]
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
test/e2e/apimachinery/resource_quota.go:535

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class","completed":4,"skipped":27,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology [BeforeEach]
    test/e2e/storage/framework/testsuite.go:51
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies
      test/e2e/storage/testsuites/topology.go:194

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
      test/e2e/storage/framework/testsuite.go:51
    Jul  8 07:21:24.688: INFO: Only supported for providers [vsphere] (not skeleton)
... skipping 110 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (delayed binding)] topology [BeforeEach]
    test/e2e/storage/framework/testsuite.go:51
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies
      test/e2e/storage/testsuites/topology.go:194

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
      test/e2e/storage/framework/testsuite.go:51
    Jul  8 07:21:24.708: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 58 lines ...
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.132 seconds]
[sig-node] NodeProblemDetector [BeforeEach]
test/e2e/node/node_problem_detector.go:54
  should run without error
  test/e2e/node/node_problem_detector.go:62

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [sig-node] NodeProblemDetector
      test/e2e/framework/framework.go:186
    STEP: Creating a kubernetes client 07/08/22 07:21:24.728
    Jul  8 07:21:24.728: INFO: >>> kubeConfig: /root/.kube/kind-test-config
    STEP: Building a namespace api object, basename node-problem-detector 07/08/22 07:21:24.73
    STEP: Waiting for a default service account to be provisioned in namespace 07/08/22 07:21:24.785
    STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/08/22 07:21:24.788
    [BeforeEach] [sig-node] NodeProblemDetector
      test/e2e/node/node_problem_detector.go:54
    Jul  8 07:21:24.792: INFO: No SSH Key for provider skeleton: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''
    [AfterEach] [sig-node] NodeProblemDetector
      test/e2e/framework/framework.go:187
    Jul  8 07:21:24.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "node-problem-detector-315" for this suite. 07/08/22 07:21:24.824
    [ReportAfterEach] TOP-LEVEL
      test/e2e/e2e_test.go:142
  << End Captured GinkgoWriter Output

  No SSH Key for provider skeleton: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''
  In [BeforeEach] at: test/e2e/node/node_problem_detector.go:55
------------------------------
S
------------------------------
• [SLOW TEST] [60.292 seconds]
[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]
test/e2e/network/service.go:2117

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","completed":7,"skipped":52,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [14.193 seconds]
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/common/storage/configmap_volume.go:56

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","completed":2,"skipped":15,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [SLOW TEST] [14.800 seconds]
[sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]
test/e2e/auth/service_accounts.go:75

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","completed":5,"skipped":50,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
... skipping 43 lines ...
------------------------------
• [SLOW TEST] [33.342 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
test/e2e/storage/testsuites/subpath.go:367

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","completed":6,"skipped":37,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [158.688 seconds]
[sig-network] Services should implement service.kubernetes.io/service-proxy-name
test/e2e/network/service.go:2156

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","completed":1,"skipped":7,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [50.743 seconds]
[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]
test/e2e/network/service.go:1436

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","completed":3,"skipped":18,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [SLOW TEST] [84.513 seconds]
[sig-node] Probing container should be restarted startup probe fails
test/e2e/common/node/container_probe.go:317

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","completed":3,"skipped":27,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 68 lines ...
------------------------------
• [SLOW TEST] [37.059 seconds]
[sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1
test/e2e/storage/persistent_volumes-local.go:234

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","completed":4,"skipped":35,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology [BeforeEach]
    test/e2e/storage/framework/testsuite.go:51
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies
      test/e2e/storage/testsuites/topology.go:194

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
      test/e2e/storage/framework/testsuite.go:51
    Jul  8 07:21:43.309: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 10 lines ...
------------------------------
• [SLOW TEST] [25.360 seconds]
[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/common/node/kubelet_etc_hosts.go:63

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","completed":4,"skipped":32,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 70 lines ...
------------------------------
• [SLOW TEST] [101.086 seconds]
[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data
test/e2e/storage/testsuites/volumes.go:161

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data","completed":2,"skipped":14,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSS
------------------------------
• [SLOW TEST] [10.412 seconds]
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
test/e2e/common/storage/configmap_volume.go:46

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","completed":2,"skipped":7,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSS
------------------------------
• [0.262 seconds]
[sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf,application/json"
test/e2e/apimachinery/protocol.go:48

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf,application/json\"","completed":3,"skipped":14,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [19.175 seconds]
[sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a successful command
test/e2e/kubectl/kubectl.go:528

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a successful command","completed":8,"skipped":53,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 122 lines ...
------------------------------
• [SLOW TEST] [66.509 seconds]
[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
test/e2e/storage/testsuites/volume_expand.go:159

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","completed":9,"skipped":64,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [1.757 seconds]
[sig-cli] Kubectl client Kubectl cluster-info dump should check if cluster-info dump succeeds
test/e2e/kubectl/kubectl.go:1253

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info dump should check if cluster-info dump succeeds","completed":10,"skipped":65,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [36.714 seconds]
[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
test/e2e/network/dns.go:193

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","completed":5,"skipped":41,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 68 lines ...
------------------------------
• [SLOW TEST] [8.257 seconds]
[sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]
test/e2e/apps/disruption.go:163

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","completed":5,"skipped":41,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 16 lines ...
  << End Captured GinkgoWriter Output

  Driver local doesn't support GenericEphemeralVolume -- skipping
  In [BeforeEach] at: test/e2e/storage/framework/testsuite.go:116
------------------------------
• [SLOW TEST] [50.994 seconds]
[sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
test/e2e/common/node/init_container.go:333

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","completed":5,"skipped":52,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
• [SLOW TEST] [11.180 seconds]
[sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]
test/e2e/apps/replica_set.go:165

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","completed":5,"skipped":51,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 145 lines ...
------------------------------
• [SLOW TEST] [60.102 seconds]
[sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
test/e2e/common/node/container_probe.go:104

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","completed":4,"skipped":21,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [18.113 seconds]
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
test/e2e/common/storage/downwardapi_volume.go:234

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","completed":4,"skipped":38,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
... skipping 45 lines ...
------------------------------
• [1.455 seconds]
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
test/e2e/apimachinery/garbage_collector.go:550

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","completed":5,"skipped":42,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [16.391 seconds]
[sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
test/e2e/apimachinery/resource_quota.go:793

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","completed":4,"skipped":54,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.107 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
... skipping 84 lines ...
------------------------------
• [SLOW TEST] [44.976 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
test/e2e/storage/testsuites/subpath.go:367

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","completed":7,"skipped":33,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
• [SLOW TEST] [51.852 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
test/e2e/storage/testsuites/volumemode.go:354

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","completed":5,"skipped":41,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSS
------------------------------
S [SKIPPED] [0.104 seconds]
[sig-storage] CSI Volumes
... skipping 56 lines ...
------------------------------
• [SLOW TEST] [77.465 seconds]
[sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for ephemermal volume and drivers with attachment
test/e2e/storage/csi_mock_volume.go:360

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for ephemermal volume and drivers with attachment","completed":6,"skipped":64,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
• [SLOW TEST] [14.397 seconds]
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/common/storage/downwardapi_volume.go:83

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","completed":6,"skipped":49,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [63.161 seconds]
[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
test/e2e/storage/testsuites/volume_expand.go:159

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","completed":5,"skipped":26,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
... skipping 118 lines ...
------------------------------
• [SLOW TEST] [92.691 seconds]
[sig-node] Mount propagation should propagate mounts within defined scopes
test/e2e/node/mount_propagation.go:85

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Mount propagation should propagate mounts within defined scopes","completed":3,"skipped":26,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
• [SLOW TEST] [43.201 seconds]
[sig-node] Pods should support pod readiness gates [NodeConformance]
test/e2e/common/node/pods.go:770

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeConformance]","completed":7,"skipped":38,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 70 lines ...
------------------------------
• [SLOW TEST] [20.106 seconds]
[sig-apps] Job should apply changes to a job status [Conformance]
test/e2e/apps/job.go:464

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] Job should apply changes to a job status [Conformance]","completed":5,"skipped":63,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [105.908 seconds]
[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default
test/e2e/storage/csi_mock_volume.go:1660

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","completed":5,"skipped":55,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 45 lines ...
------------------------------
• [SLOW TEST] [23.868 seconds]
[sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]
test/e2e/kubectl/kubectl.go:1404

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","completed":8,"skipped":35,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [0.232 seconds]
[sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]
test/e2e/kubectl/kubectl.go:1801

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","completed":9,"skipped":35,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [37.854 seconds]
[sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:251

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","completed":4,"skipped":20,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [24.440 seconds]
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
test/e2e/common/storage/downwardapi_volume.go:52

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","completed":6,"skipped":47,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 102 lines ...
------------------------------
• [SLOW TEST] [26.717 seconds]
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
test/e2e/apps/deployment.go:160

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","completed":6,"skipped":49,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-windows] Services [BeforeEach]
... skipping 18 lines ...
------------------------------
• [0.222 seconds]
[sig-api-machinery] Generated clientset should create v1 cronJobs, delete cronJobs, watch cronJobs
test/e2e/apimachinery/generated_clientset.go:219

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] Generated clientset should create v1 cronJobs, delete cronJobs, watch cronJobs","completed":7,"skipped":56,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 70 lines ...
------------------------------
• [SLOW TEST] [12.248 seconds]
[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/common/storage/downwardapi_volume.go:67

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","completed":6,"skipped":61,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSS
------------------------------
• [0.089 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
test/e2e/apimachinery/custom_resource_definition.go:198

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","completed":7,"skipped":67,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology [BeforeEach]
    test/e2e/storage/framework/testsuite.go:51
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies
      test/e2e/storage/testsuites/topology.go:194

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
      test/e2e/storage/framework/testsuite.go:51
    Jul  8 07:22:31.209: INFO: Only supported for providers [azure] (not skeleton)
... skipping 10 lines ...
------------------------------
• [0.179 seconds]
[sig-node] Lease lease API should be available [Conformance]
test/e2e/common/node/lease.go:72

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","completed":8,"skipped":70,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [44.460 seconds]
[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
test/e2e/common/node/lifecycle_hook.go:114

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","completed":11,"skipped":81,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [45.604 seconds]
[sig-network] Services should be rejected for evicted pods (no endpoints exist)
test/e2e/network/service.go:2304

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Services should be rejected for evicted pods (no endpoints exist)","completed":6,"skipped":47,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 72 lines ...
------------------------------
• [SLOW TEST] [16.289 seconds]
[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]
test/e2e/kubectl/kubectl.go:960

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","completed":5,"skipped":24,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] CSI Volumes
... skipping 97 lines ...
------------------------------
• [SLOW TEST] [24.123 seconds]
[sig-node] Containers should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance]
test/e2e/common/node/containers.go:58

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Containers should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance]","completed":4,"skipped":29,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
• [0.142 seconds]
[sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]
test/e2e/network/ingressclass.go:200

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","completed":5,"skipped":32,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 68 lines ...
------------------------------
• [SLOW TEST] [24.160 seconds]
[sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
test/e2e/common/node/secrets.go:45

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","completed":8,"skipped":50,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [0.073 seconds]
[sig-node] NodeLease NodeLease the kubelet should create and update a lease in the kube-node-lease namespace
test/e2e/common/node/node_lease.go:52

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] NodeLease NodeLease the kubelet should create and update a lease in the kube-node-lease namespace","completed":9,"skipped":51,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 91 lines ...
------------------------------
• [SLOW TEST] [21.035 seconds]
[sig-cli] Kubectl client Simple pod should support exec using resource/name
test/e2e/kubectl/kubectl.go:459

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","completed":10,"skipped":39,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.045 seconds]
[sig-autoscaling] DNS horizontal autoscaling [BeforeEach]
test/e2e/autoscaling/dns_autoscaling.go:59
  kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
... skipping 119 lines ...
------------------------------
• [SLOW TEST] [80.508 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
test/e2e/storage/testsuites/volumes.go:161

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","completed":3,"skipped":28,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [47.272 seconds]
[sig-storage] Dynamic Provisioning GlusterDynamicProvisioner should create and delete persistent volumes [fast]
test/e2e/storage/volume_provisioning.go:712

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Dynamic Provisioning GlusterDynamicProvisioner should create and delete persistent volumes [fast]","completed":7,"skipped":66,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
• [SLOW TEST] [26.239 seconds]
[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
test/e2e/common/node/pods.go:617

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","completed":7,"skipped":66,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
• [SLOW TEST] [18.243 seconds]
[sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/projected_configmap.go:61

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","completed":9,"skipped":76,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
• [SLOW TEST] [66.191 seconds]
[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:232

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","completed":3,"skipped":19,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 95 lines ...
------------------------------
• [0.448 seconds]
[sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses
test/e2e/kubectl/kubectl.go:832

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses","completed":4,"skipped":39,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [58.329 seconds]
[sig-apps] TTLAfterFinished job should be deleted once it finishes after TTL seconds
test/e2e/apps/ttl_after_finished.go:48

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] TTLAfterFinished job should be deleted once it finishes after TTL seconds","completed":6,"skipped":78,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [16.490 seconds]
[sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
test/e2e/storage/testsuites/subpath.go:367

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","completed":6,"skipped":53,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 70 lines ...
------------------------------
• [SLOW TEST] [91.284 seconds]
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
test/e2e/common/storage/secrets_volume.go:204

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","completed":5,"skipped":82,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 45 lines ...
------------------------------
• [SLOW TEST] [93.834 seconds]
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
test/e2e/common/storage/configmap_volume.go:123

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","completed":5,"skipped":21,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 68 lines ...
------------------------------
• [SLOW TEST] [186.001 seconds]
[sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on
test/e2e/storage/csi_mock_volume.go:668

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on","completed":5,"skipped":60,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSS
------------------------------
• [SLOW TEST] [10.265 seconds]
[sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
test/e2e/common/node/expansion.go:43

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","completed":8,"skipped":70,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [10.280 seconds]
[sig-auth] ServiceAccounts should mount projected service account token [Conformance]
test/e2e/auth/service_accounts.go:272

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","completed":5,"skipped":43,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 20 lines ...
------------------------------
• [1.739 seconds]
[sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration
test/e2e/apimachinery/flowcontrol.go:58

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration","completed":9,"skipped":70,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [1.227 seconds]
[sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota
test/e2e/apps/replica_set.go:122

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota","completed":6,"skipped":48,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
• [SLOW TEST] [14.138 seconds]
[sig-node] Containers should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]
test/e2e/common/node/containers.go:72

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Containers should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]","completed":4,"skipped":37,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
S [SKIPPED] [0.072 seconds]
[sig-storage] Dynamic Provisioning
test/e2e/storage/utils/framework.go:23
  Invalid AWS KMS key
  test/e2e/storage/volume_provisioning.go:742
    [It] should report an error and create no PV
    test/e2e/storage/volume_provisioning.go:743

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [sig-storage] Dynamic Provisioning
      test/e2e/framework/framework.go:186
    STEP: Creating a kubernetes client 07/08/22 07:23:02.26
    Jul  8 07:23:02.260: INFO: >>> kubeConfig: /root/.kube/kind-test-config
    STEP: Building a namespace api object, basename volume-provisioning 07/08/22 07:23:02.261
    STEP: Waiting for a default service account to be provisioned in namespace 07/08/22 07:23:02.313
    STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 07/08/22 07:23:02.317
    [BeforeEach] [sig-storage] Dynamic Provisioning
      test/e2e/storage/volume_provisioning.go:146
    [It] should report an error and create no PV
      test/e2e/storage/volume_provisioning.go:743
    Jul  8 07:23:02.321: INFO: Only supported for providers [aws] (not skeleton)
    [AfterEach] [sig-storage] Dynamic Provisioning
      test/e2e/framework/framework.go:187
    Jul  8 07:23:02.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "volume-provisioning-8049" for this suite. 07/08/22 07:23:02.326
... skipping 29 lines ...
------------------------------
• [SLOW TEST] [24.314 seconds]
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/common/storage/empty_dir.go:126

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","completed":10,"skipped":60,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 70 lines ...
------------------------------
• [SLOW TEST] [47.522 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
test/e2e/storage/testsuites/volumemode.go:354

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","completed":6,"skipped":70,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.076 seconds]
[sig-storage] CSI Volumes
... skipping 33 lines ...
------------------------------
• [SLOW TEST] [11.865 seconds]
[sig-cli] Kubectl client Kubectl validation should detect unknown metadata fields in both the root and embedded object of a CR
test/e2e/kubectl/kubectl.go:1110

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl validation should detect unknown metadata fields in both the root and embedded object of a CR","completed":6,"skipped":93,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
• [SLOW TEST] [14.880 seconds]
[sig-cli] Kubectl client Kubectl validation should create/apply a CR with unknown fields for CRD with no validation schema
test/e2e/kubectl/kubectl.go:1034

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl validation should create/apply a CR with unknown fields for CRD with no validation schema","completed":7,"skipped":70,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [0.262 seconds]
[sig-node] ConfigMap should update ConfigMap successfully
test/e2e/common/node/configmap.go:142

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","completed":8,"skipped":76,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [258.815 seconds]
[sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
test/e2e/common/node/container_probe.go:180

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","completed":1,"skipped":0,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
• [0.103 seconds]
[sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
test/e2e/node/pods.go:161

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","completed":2,"skipped":2,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [0.157 seconds]
[sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.
test/e2e/instrumentation/monitoring/metrics_grabber.go:86

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","completed":3,"skipped":2,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
... skipping 74 lines ...
------------------------------
• [SLOW TEST] [12.193 seconds]
[sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]
test/e2e/common/node/secrets.go:94

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","completed":5,"skipped":48,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [90.818 seconds]
[sig-network] Services should be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is true
test/e2e/network/service.go:1803

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Services should be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is true","completed":9,"skipped":72,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [84.579 seconds]
[sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on
test/e2e/storage/csi_mock_volume.go:668

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on","completed":6,"skipped":55,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [12.208 seconds]
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
test/e2e/common/node/downwardapi.go:165

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","completed":7,"skipped":76,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [16.334 seconds]
[sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)
test/e2e/common/storage/empty_dir.go:67

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","completed":7,"skipped":50,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: aws]
... skipping 18 lines ...
------------------------------
• [0.197 seconds]
[sig-instrumentation] Events API should delete a collection of events [Conformance]
test/e2e/instrumentation/events.go:207

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","completed":8,"skipped":51,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [1.164 seconds]
[sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob
test/e2e/kubectl/kubectl.go:1362

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob","completed":8,"skipped":76,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [43.864 seconds]
[sig-network] CVE-2021-29923 IPv4 Service Type ClusterIP with leading zeros should work interpreted as decimal
test/e2e/network/funny_ips.go:92

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] CVE-2021-29923 IPv4 Service Type ClusterIP with leading zeros should work interpreted as decimal","completed":7,"skipped":62,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [24.422 seconds]
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
test/e2e/common/storage/projected_configmap.go:108

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","completed":6,"skipped":65,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSS
------------------------------
• [SLOW TEST] [8.317 seconds]
[sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
test/e2e/common/node/expansion.go:72

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","completed":4,"skipped":10,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [SLOW TEST] [68.929 seconds]
[sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
test/e2e/storage/testsuites/subpath.go:196

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","completed":6,"skipped":50,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [0.280 seconds]
[sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]
test/e2e/kubectl/kubectl.go:1239

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","completed":7,"skipped":51,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [11.481 seconds]
[sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
test/e2e/common/node/runtime.go:194

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","completed":10,"skipped":74,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [SLOW TEST] [34.644 seconds]
[sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]
test/e2e/common/node/pods.go:443

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","completed":7,"skipped":83,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
• [SLOW TEST] [46.102 seconds]
[sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
test/e2e/apps/job.go:254

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","completed":11,"skipped":51,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] CSI Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [22.319 seconds]
[sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]
test/e2e/apps/disruption.go:140

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","completed":9,"skipped":81,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [12.290 seconds]
[sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/common/node/kubelet.go:199

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","completed":8,"skipped":66,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [6.910 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]
test/e2e/apimachinery/custom_resource_definition.go:85

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","completed":10,"skipped":86,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [72.012 seconds]
[sig-network] Networking Granular Checks: Services should function for node-Service: udp
test/e2e/network/networking.go:206

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for node-Service: udp","completed":8,"skipped":72,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [131.853 seconds]
[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None
test/e2e/storage/csi_mock_volume.go:1660

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None","completed":6,"skipped":58,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
... skipping 68 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (delayed binding)] topology [BeforeEach]
    test/e2e/storage/framework/testsuite.go:51
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies
      test/e2e/storage/testsuites/topology.go:194

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
      test/e2e/storage/framework/testsuite.go:51
    Jul  8 07:23:40.381: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 58 lines ...
------------------------------
• [SLOW TEST] [52.324 seconds]
[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]
test/e2e/kubectl/kubectl.go:350

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","completed":10,"skipped":79,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [71.503 seconds]
[sig-network] Networking Granular Checks: Services should function for pod-Service: udp
test/e2e/network/networking.go:162

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for pod-Service: udp","completed":12,"skipped":81,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [19.854 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
test/e2e/apimachinery/crd_conversion_webhook.go:184

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","completed":8,"skipped":85,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [14.390 seconds]
[sig-node] Pods should get a host IP [NodeConformance] [Conformance]
test/e2e/common/node/pods.go:203

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","completed":9,"skipped":71,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [152.312 seconds]
[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
test/e2e/storage/testsuites/volume_expand.go:252

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","completed":4,"skipped":29,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [50.752 seconds]
[sig-node] Probing container should mark readiness on pods to false while pod is in progress of terminating when a pod has a readiness probe
test/e2e/common/node/container_probe.go:558

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Probing container should mark readiness on pods to false while pod is in progress of terminating when a pod has a readiness probe","completed":6,"skipped":31,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
... skipping 118 lines ...
------------------------------
• [SLOW TEST] [9.182 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
test/e2e/apimachinery/crd_publish_openapi.go:194

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","completed":9,"skipped":72,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [44.876 seconds]
[sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1
test/e2e/storage/persistent_volumes-local.go:240

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","completed":7,"skipped":95,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: aws]
... skipping 45 lines ...
------------------------------
• [SLOW TEST] [75.711 seconds]
[sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment
test/e2e/storage/csi_mock_volume.go:360

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","completed":6,"skipped":40,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [SLOW TEST] [32.900 seconds]
[sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1
test/e2e/storage/persistent_volumes-local.go:240

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","completed":12,"skipped":55,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [12.295 seconds]
[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/common/storage/projected_downwardapi.go:83

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","completed":7,"skipped":54,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 97 lines ...
------------------------------
• [SLOW TEST] [49.432 seconds]
[sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:251

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","completed":9,"skipped":51,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [45.342 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
test/e2e/storage/testsuites/volumes.go:198

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","completed":5,"skipped":24,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
... skipping 121 lines ...
------------------------------
• [SLOW TEST] [30.268 seconds]
[sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/common/node/security_context.go:90

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","completed":7,"skipped":70,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 75 lines ...
------------------------------
• [SLOW TEST] [68.006 seconds]
[sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:251

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","completed":11,"skipped":75,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
• [SLOW TEST] [57.291 seconds]
[sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:257

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","completed":7,"skipped":61,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSSS
------------------------------
S [SKIPPED] [0.101 seconds]
[sig-apps] ReplicaSet
... skipping 175 lines ...
------------------------------
• [SLOW TEST] [88.760 seconds]
[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:447

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","completed":8,"skipped":68,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [51.988 seconds]
[sig-storage] Volumes ConfigMap should be mountable
test/e2e/storage/volumes.go:50

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Volumes ConfigMap should be mountable","completed":11,"skipped":84,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSSSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 45 lines ...
------------------------------
• [SLOW TEST] [82.249 seconds]
[sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
test/e2e/common/node/container_probe.go:244

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","completed":10,"skipped":71,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 72 lines ...
------------------------------
• [0.482 seconds]
[sig-network] Services should prevent NodePort collisions
test/e2e/network/service.go:1473

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Services should prevent NodePort collisions","completed":11,"skipped":84,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [38.987 seconds]
[sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1
test/e2e/storage/persistent_volumes-local.go:240

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","completed":13,"skipped":81,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 45 lines ...
------------------------------
• [SLOW TEST] [23.521 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
test/e2e/apimachinery/crd_publish_openapi.go:309

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","completed":13,"skipped":61,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
• [0.364 seconds]
[sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used
test/e2e/apimachinery/apply.go:485

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used","completed":14,"skipped":63,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 20 lines ...
------------------------------
• [0.169 seconds]
[sig-instrumentation] Events should delete a collection of events [Conformance]
test/e2e/instrumentation/core_events.go:250

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","completed":15,"skipped":68,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 93 lines ...
------------------------------
• [2.064 seconds]
[sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID
test/e2e/common/node/security_context.go:161

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","completed":14,"skipped":85,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [0.191 seconds]
[sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]
test/e2e/auth/service_accounts.go:646

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","completed":15,"skipped":100,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 120 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (delayed binding)] topology [BeforeEach]
    test/e2e/storage/framework/testsuite.go:51
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies
      test/e2e/storage/testsuites/topology.go:194

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
      test/e2e/storage/framework/testsuite.go:51
    Jul  8 07:24:26.907: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 58 lines ...
------------------------------
• [SLOW TEST] [65.675 seconds]
[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service
test/e2e/network/conntrack.go:132

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","completed":7,"skipped":71,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
... skipping 66 lines ...
------------------------------
• [0.128 seconds]
[sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]
test/e2e/kubectl/kubectl.go:1674

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","completed":8,"skipped":75,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [21.015 seconds]
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
test/e2e/common/node/downwardapi.go:216

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","completed":6,"skipped":42,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
... skipping 45 lines ...
------------------------------
• [0.083 seconds]
[sig-node] NodeLease NodeLease should have OwnerReferences set
test/e2e/common/node/node_lease.go:90

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] NodeLease NodeLease should have OwnerReferences set","completed":7,"skipped":55,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [20.214 seconds]
[sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
test/e2e/common/node/security_context.go:219

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","completed":8,"skipped":89,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [71.547 seconds]
[sig-network] Networking should check kube-proxy urls
test/e2e/network/networking.go:132

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Networking should check kube-proxy urls","completed":8,"skipped":52,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 20 lines ...
------------------------------
• [0.128 seconds]
[sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance]
test/e2e/common/node/runtimeclass.go:156

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance]","completed":9,"skipped":56,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [50.915 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
test/e2e/storage/testsuites/volumes.go:198

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","completed":10,"skipped":75,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [12.139 seconds]
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/common/storage/secrets_volume.go:88

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","completed":9,"skipped":78,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] CSI Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [18.172 seconds]
[sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]
test/e2e/node/security_context.go:163

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","completed":12,"skipped":85,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
• [SLOW TEST] [89.192 seconds]
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
test/e2e/common/storage/projected_secret.go:214

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","completed":6,"skipped":63,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
... skipping 95 lines ...
------------------------------
• [SLOW TEST] [30.904 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:382

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","completed":12,"skipped":77,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [36.238 seconds]
[sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
test/e2e/apps/disruption.go:289

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage =\u003e should allow an eviction","completed":8,"skipped":82,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [28.863 seconds]
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
test/e2e/common/storage/projected_downwardapi.go:161

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","completed":12,"skipped":100,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
• [SLOW TEST] [6.759 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
test/e2e/apimachinery/crd_publish_openapi.go:153

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","completed":13,"skipped":77,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
... skipping 43 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology [BeforeEach]
    test/e2e/storage/framework/testsuite.go:51
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies
      test/e2e/storage/testsuites/topology.go:194

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
      test/e2e/storage/framework/testsuite.go:51
    Jul  8 07:24:51.113: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 56 lines ...
------------------------------
• [SLOW TEST] [11.174 seconds]
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
test/e2e/apimachinery/resource_quota.go:438

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","completed":10,"skipped":84,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [65.317 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:447

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","completed":9,"skipped":87,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [18.395 seconds]
[sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/downwardapi_volume.go:108

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","completed":9,"skipped":90,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 72 lines ...
------------------------------
• [SLOW TEST] [18.377 seconds]
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
test/e2e/common/storage/projected_secret.go:45

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","completed":10,"skipped":60,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 91 lines ...
------------------------------
• [1.396 seconds]
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
test/e2e/apps/rc.go:82

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","completed":10,"skipped":87,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 72 lines ...
------------------------------
• [0.194 seconds]
[sig-node] PodTemplates should replace a pod template [Conformance]
test/e2e/common/node/podtemplates.go:176

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] PodTemplates should replace a pod template [Conformance]","completed":11,"skipped":96,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [8.926 seconds]
[sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]
test/e2e/kubectl/kubectl.go:1641

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","completed":14,"skipped":83,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 95 lines ...
------------------------------
• [SLOW TEST] [47.116 seconds]
[sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1
test/e2e/storage/persistent_volumes-local.go:234

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","completed":9,"skipped":71,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
• [SLOW TEST] [18.234 seconds]
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
test/e2e/storage/empty_dir_wrapper.go:67

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","completed":13,"skipped":102,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
• [SLOW TEST] [19.320 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
test/e2e/apimachinery/webhook.go:290

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","completed":9,"skipped":83,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [26.155 seconds]
[sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
test/e2e/common/node/expansion.go:91

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","completed":7,"skipped":74,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
• [SLOW TEST] [30.468 seconds]
[sig-node] Pods should delete a collection of pods [Conformance]
test/e2e/common/node/pods.go:844

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","completed":13,"skipped":89,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [65.001 seconds]
[sig-network] Networking Granular Checks: Services should function for node-Service: http
test/e2e/network/networking.go:192

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for node-Service: http","completed":8,"skipped":70,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [33.214 seconds]
[sig-apps] Deployment deployment should delete old replica sets [Conformance]
test/e2e/apps/deployment.go:122

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","completed":11,"skipped":76,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [198.160 seconds]
[sig-node] Pods Extended Pod Container Status should never report success for a pending container
test/e2e/node/pods.go:208

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report success for a pending container","completed":5,"skipped":21,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [6.127 seconds]
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
test/e2e/common/storage/projected_secret.go:77

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","completed":8,"skipped":77,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 20 lines ...
------------------------------
• [0.044 seconds]
[sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
test/e2e/apimachinery/table_conversion.go:154

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","completed":9,"skipped":81,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
• [SLOW TEST] [24.378 seconds]
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
test/e2e/common/storage/downwardapi_volume.go:220

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","completed":11,"skipped":67,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [SLOW TEST] [93.765 seconds]
[sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false
test/e2e/storage/csi_mock_volume.go:1602

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false","completed":10,"skipped":71,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
• [SLOW TEST] [19.227 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
test/e2e/apimachinery/webhook.go:220

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","completed":10,"skipped":74,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [SLOW TEST] [14.223 seconds]
[sig-storage] HostPath should support r/w [NodeConformance]
test/e2e/common/storage/host_path.go:67

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","completed":14,"skipped":91,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
• [SLOW TEST] [32.498 seconds]
[sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects
test/e2e/kubectl/portforward.go:456

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects","completed":12,"skipped":97,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [106.986 seconds]
[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled
test/e2e/storage/csi_mock_volume.go:1377

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled","completed":11,"skipped":80,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [32.825 seconds]
[sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1
test/e2e/storage/persistent_volumes-local.go:234

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","completed":15,"skipped":114,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
... skipping 68 lines ...
------------------------------
• [SLOW TEST] [26.978 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
test/e2e/apimachinery/webhook.go:196

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","completed":10,"skipped":83,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 72 lines ...
------------------------------
• [SLOW TEST] [16.507 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
test/e2e/apimachinery/webhook.go:251

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","completed":12,"skipped":80,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [41.386 seconds]
[sig-node] PreStop should call prestop when killing a pod  [Conformance]
test/e2e/node/pre_stop.go:168

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","completed":10,"skipped":103,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [SLOW TEST] [26.235 seconds]
[sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:232

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","completed":9,"skipped":70,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 216 lines ...
------------------------------
• [SLOW TEST] [91.675 seconds]
[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
test/e2e/storage/testsuites/ephemeral.go:315

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes","completed":10,"skipped":52,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [37.777 seconds]
[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]
test/e2e/apps/deployment.go:185

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","completed":14,"skipped":105,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 45 lines ...
------------------------------
• [SLOW TEST] [52.484 seconds]
[sig-storage] CSI mock volume storage capacity unlimited
test/e2e/storage/csi_mock_volume.go:1158

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI mock volume storage capacity unlimited","completed":11,"skipped":86,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
... skipping 93 lines ...
------------------------------
• [SLOW TEST] [12.186 seconds]
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
test/e2e/common/storage/downwardapi_volume.go:260

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","completed":16,"skipped":131,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
• [SLOW TEST] [32.067 seconds]
[sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname [Conformance]
test/e2e/apps/job.go:194

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname [Conformance]","completed":6,"skipped":37,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 66 lines ...
------------------------------
• [SLOW TEST] [221.284 seconds]
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
test/e2e/apps/statefulset.go:315

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","completed":7,"skipped":49,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [79.584 seconds]
[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
test/e2e/storage/testsuites/ephemeral.go:196

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume","completed":16,"skipped":133,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 70 lines ...
------------------------------
• [SLOW TEST] [22.248 seconds]
[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]
test/e2e/apps/rc.go:66

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","completed":15,"skipped":94,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
• [SLOW TEST] [122.823 seconds]
[sig-storage] CSI mock volume storage capacity exhausted, immediate binding
test/e2e/storage/csi_mock_volume.go:1158

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, immediate binding","completed":7,"skipped":57,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [20.209 seconds]
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
test/e2e/apimachinery/watch.go:60

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","completed":11,"skipped":98,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [0.062 seconds]
[sig-node] Secrets should patch a secret [Conformance]
test/e2e/common/node/secrets.go:153

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","completed":12,"skipped":98,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 95 lines ...
------------------------------
• [SLOW TEST] [8.188 seconds]
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
test/e2e/common/storage/downwardapi_volume.go:192

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","completed":17,"skipped":144,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 118 lines ...
------------------------------
• [0.199 seconds]
[sig-node] PodTemplates should delete a collection of pod templates [Conformance]
test/e2e/common/node/podtemplates.go:122

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","completed":18,"skipped":174,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSS
------------------------------
• [SLOW TEST] [136.159 seconds]
[sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret
test/e2e/storage/ephemeral_volume.go:57

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","completed":11,"skipped":87,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [SLOW TEST] [10.307 seconds]
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/configmap_volume.go:112

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","completed":8,"skipped":56,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 145 lines ...
------------------------------
• [0.647 seconds]
[sig-network] Services should check NodePort out-of-range
test/e2e/network/service.go:1527

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Services should check NodePort out-of-range","completed":9,"skipped":84,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [SLOW TEST] [35.764 seconds]
[sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:251

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","completed":11,"skipped":73,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [160.517 seconds]
[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
test/e2e/storage/testsuites/ephemeral.go:216

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs","completed":9,"skipped":82,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 45 lines ...
------------------------------
• [SLOW TEST] [94.294 seconds]
[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
test/e2e/storage/testsuites/ephemeral.go:196

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume","completed":16,"skipped":82,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
• [SLOW TEST] [15.636 seconds]
[sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]
test/e2e/kubectl/kubectl.go:1498

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","completed":7,"skipped":41,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [133.097 seconds]
[sig-network] Networking Granular Checks: Services should update endpoints: http
test/e2e/network/networking.go:328

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Networking Granular Checks: Services should update endpoints: http","completed":5,"skipped":30,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 145 lines ...
------------------------------
• [SLOW TEST] [12.528 seconds]
[sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]
test/e2e/kubectl/kubectl.go:1702

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","completed":16,"skipped":96,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
... skipping 93 lines ...
------------------------------
• [0.612 seconds]
[sig-network] Netpol API should support creating NetworkPolicy API operations
test/e2e/network/netpol/network_policy_api.go:50

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","completed":8,"skipped":42,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [92.235 seconds]
[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]
test/e2e/apps/cronjob.go:69

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","completed":8,"skipped":63,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
• [0.755 seconds]
[sig-network] Ingress API should support creating Ingress API operations [Conformance]
test/e2e/network/ingress.go:552

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","completed":6,"skipped":55,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSS
------------------------------
• [0.084 seconds]
[sig-api-machinery] server version should find the server version [Conformance]
test/e2e/apimachinery/server_version.go:39

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","completed":7,"skipped":62,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 20 lines ...
------------------------------
• [0.732 seconds]
[sig-storage] CSIStorageCapacity  should support CSIStorageCapacities API operations [Conformance]
test/e2e/storage/csistoragecapacity.go:49

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSIStorageCapacity  should support CSIStorageCapacities API operations [Conformance]","completed":8,"skipped":65,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 95 lines ...
------------------------------
• [SLOW TEST] [18.854 seconds]
[sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should store data
test/e2e/storage/testsuites/volumes.go:161

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should store data","completed":12,"skipped":98,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [34.475 seconds]
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance]
test/e2e/storage/subpath.go:106

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance]","completed":12,"skipped":89,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [7.298 seconds]
[sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
test/e2e/apimachinery/resource_quota.go:65

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","completed":10,"skipped":94,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 72 lines ...
------------------------------
• [SLOW TEST] [8.191 seconds]
[sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]
test/e2e/node/security_context.go:171

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","completed":12,"skipped":79,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] CSI Volumes
... skipping 118 lines ...
------------------------------
• [SLOW TEST] [34.529 seconds]
[sig-network] Services should create endpoints for unready pods
test/e2e/network/service.go:1657

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Services should create endpoints for unready pods","completed":13,"skipped":81,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 145 lines ...
------------------------------
• [SLOW TEST] [8.171 seconds]
[sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
test/e2e/common/node/security_context.go:229

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","completed":11,"skipped":112,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 72 lines ...
------------------------------
• [SLOW TEST] [14.255 seconds]
[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
test/e2e/node/security_context.go:132

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","completed":9,"skipped":52,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 122 lines ...
------------------------------
• [SLOW TEST] [14.277 seconds]
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/common/storage/empty_dir.go:206

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","completed":9,"skipped":65,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
... skipping 18 lines ...
------------------------------
• [SLOW TEST] [13.321 seconds]
[sig-api-machinery] Garbage collector should support cascading deletion of custom resources
test/e2e/apimachinery/garbage_collector.go:905

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] Garbage collector should support cascading deletion of custom resources","completed":9,"skipped":83,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
• [SLOW TEST] [36.764 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
test/e2e/storage/testsuites/subpath.go:367

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","completed":11,"skipped":56,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [8.092 seconds]
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/common/storage/empty_dir.go:106

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","completed":14,"skipped":106,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
• [SLOW TEST] [12.204 seconds]
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
test/e2e/common/storage/projected_downwardapi.go:220

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","completed":13,"skipped":102,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (delayed binding)] topology [BeforeEach]
    test/e2e/storage/framework/testsuite.go:51
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies
      test/e2e/storage/testsuites/topology.go:194

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
      test/e2e/storage/framework/testsuite.go:51
    Jul  8 07:26:17.707: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 8 lines ...
------------------------------
• [SLOW TEST] [33.275 seconds]
[sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1
test/e2e/storage/persistent_volumes-local.go:240

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","completed":17,"skipped":133,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [0.161 seconds]
[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]
test/e2e/common/node/configmap.go:168

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","completed":18,"skipped":145,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 104 lines ...
------------------------------
• [SLOW TEST] [20.183 seconds]
[sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets
test/e2e/kubectl/portforward.go:492

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets","completed":17,"skipped":84,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
... skipping 93 lines ...
------------------------------
• [SLOW TEST] [20.571 seconds]
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]
test/e2e/apps/statefulset.go:906

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","completed":13,"skipped":100,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [8.455 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
test/e2e/apimachinery/crd_publish_openapi.go:236

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","completed":10,"skipped":66,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [0.253 seconds]
[sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls
test/e2e/apimachinery/table_conversion.go:80

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls","completed":11,"skipped":67,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSS
------------------------------
• [SLOW TEST] [62.900 seconds]
[sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil
test/e2e/storage/csi_mock_volume.go:517

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil","completed":11,"skipped":82,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [60.242 seconds]
[sig-node] Pods Extended Pod Container lifecycle evicted pods should be terminal
test/e2e/node/pods.go:302

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle evicted pods should be terminal","completed":13,"skipped":98,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: aws]
... skipping 64 lines ...
------------------------------
• [SLOW TEST] [14.207 seconds]
[sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
test/e2e/network/proxy.go:286

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","completed":12,"skipped":57,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (immediate binding)] topology [BeforeEach]
    test/e2e/storage/framework/testsuite.go:51
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies
      test/e2e/storage/testsuites/topology.go:194

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
      test/e2e/storage/framework/testsuite.go:51
    Jul  8 07:26:30.385: INFO: Driver "csi-hostpath" does not support topology - skipping
... skipping 200 lines ...
------------------------------
• [SLOW TEST] [30.660 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
test/e2e/storage/testsuites/volumes.go:198

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","completed":17,"skipped":110,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 45 lines ...
------------------------------
• [SLOW TEST] [16.532 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
test/e2e/apimachinery/crd_publish_openapi.go:442

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","completed":19,"skipped":165,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
• [SLOW TEST] [10.126 seconds]
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]
test/e2e/apps/replica_set.go:111

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","completed":12,"skipped":88,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSSSSSS
------------------------------
• [SLOW TEST] [10.159 seconds]
[sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
test/e2e/common/node/downwardapi.go:110

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","completed":14,"skipped":101,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
• [0.424 seconds]
[sig-api-machinery] health handlers should contain necessary checks
test/e2e/apimachinery/health_handlers.go:122

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] health handlers should contain necessary checks","completed":15,"skipped":103,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 20 lines ...
------------------------------
• [0.058 seconds]
[sig-network] Services should find a service from listing all namespaces [Conformance]
test/e2e/network/service.go:3119

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","completed":16,"skipped":107,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
• [SLOW TEST] [23.195 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
test/e2e/apimachinery/webhook.go:380

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","completed":15,"skipped":109,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [0.098 seconds]
[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]
test/e2e/network/endpointslice.go:65

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","completed":16,"skipped":114,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] CSI Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [18.317 seconds]
[sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:447

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","completed":14,"skipped":100,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.048 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
... skipping 29 lines ...
------------------------------
• [SLOW TEST] [32.882 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:447

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","completed":12,"skipped":141,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 45 lines ...
------------------------------
• [SLOW TEST] [49.530 seconds]
[sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment
test/e2e/storage/csi_mock_volume.go:360

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","completed":12,"skipped":96,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSSSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 72 lines ...
------------------------------
• [SLOW TEST] [13.328 seconds]
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
test/e2e/apimachinery/resource_quota.go:220

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","completed":18,"skipped":115,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 45 lines ...
------------------------------
• [SLOW TEST] [66.974 seconds]
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
test/e2e/common/storage/projected_configmap.go:123

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","completed":10,"skipped":100,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
... skipping 45 lines ...
------------------------------
• [SLOW TEST] [30.321 seconds]
[sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable
test/e2e/storage/pvc_protection.go:147

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable","completed":10,"skipped":86,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [0.097 seconds]
[sig-architecture] Conformance Tests should have at least two untainted nodes [Conformance]
test/e2e/architecture/conformance.go:38

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-architecture] Conformance Tests should have at least two untainted nodes [Conformance]","completed":11,"skipped":95,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [6.188 seconds]
[sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]
test/e2e/node/security_context.go:185

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","completed":17,"skipped":126,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [52.007 seconds]
[sig-network] Networking Granular Checks: Services should support basic nodePort: udp functionality
test/e2e/network/networking.go:384

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Networking Granular Checks: Services should support basic nodePort: udp functionality","completed":10,"skipped":92,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 122 lines ...
------------------------------
• [SLOW TEST] [36.380 seconds]
[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
test/e2e/network/dns.go:290

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","completed":10,"skipped":97,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [100.886 seconds]
[sig-network] Services should not be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is false
test/e2e/network/service.go:1922

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Services should not be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is false","completed":12,"skipped":77,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [2.484 seconds]
[sig-api-machinery] Discovery Custom resource should have storage version hash
test/e2e/apimachinery/discovery.go:79

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] Discovery Custom resource should have storage version hash","completed":11,"skipped":98,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology [BeforeEach]
    test/e2e/storage/framework/testsuite.go:51
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies
      test/e2e/storage/testsuites/topology.go:194

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
      test/e2e/storage/framework/testsuite.go:51
    Jul  8 07:26:54.527: INFO: Only supported for providers [openstack] (not skeleton)
... skipping 131 lines ...
------------------------------
• [0.167 seconds]
[sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json,application/vnd.kubernetes.protobuf"
test/e2e/apimachinery/protocol.go:48

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json,application/vnd.kubernetes.protobuf\"","completed":12,"skipped":123,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [57.743 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
test/e2e/storage/testsuites/volumemode.go:354

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","completed":13,"skipped":95,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 193 lines ...
------------------------------
• [SLOW TEST] [16.233 seconds]
[sig-apps] CronJob should be able to schedule after more than 100 missed schedule
test/e2e/apps/cronjob.go:191

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] CronJob should be able to schedule after more than 100 missed schedule","completed":11,"skipped":114,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 141 lines ...
------------------------------
• [SLOW TEST] [13.575 seconds]
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
test/e2e/apimachinery/garbage_collector.go:650

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","completed":13,"skipped":78,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [79.000 seconds]
[sig-network] Networking Granular Checks: Services should function for multiple endpoint-Services with same selector
test/e2e/network/networking.go:283

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for multiple endpoint-Services with same selector","completed":19,"skipped":179,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [210.901 seconds]
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete
test/e2e/apps/statefulset.go:507

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete","completed":8,"skipped":105,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [63.069 seconds]
[sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1
test/e2e/storage/persistent_volumes-local.go:234

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1","completed":18,"skipped":101,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
... skipping 22 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology [BeforeEach]
    test/e2e/storage/framework/testsuite.go:51
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies
      test/e2e/storage/testsuites/topology.go:194

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
      test/e2e/storage/framework/testsuite.go:51
    Jul  8 07:27:23.923: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 83 lines ...
------------------------------
• [SLOW TEST] [49.475 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
test/e2e/storage/testsuites/subpath.go:196

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","completed":17,"skipped":110,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 68 lines ...
------------------------------
• [SLOW TEST] [63.572 seconds]
[sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
test/e2e/network/hostport.go:63

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","completed":12,"skipped":74,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 95 lines ...
------------------------------
• [SLOW TEST] [54.353 seconds]
[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
test/e2e/network/service.go:1394

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","completed":20,"skipped":168,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 68 lines ...
------------------------------
• [SLOW TEST] [98.138 seconds]
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
test/e2e/common/storage/downwardapi_volume.go:161

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","completed":13,"skipped":109,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [SLOW TEST] [17.232 seconds]
[sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
test/e2e/common/node/runtime.go:381

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","completed":19,"skipped":118,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [SLOW TEST] [55.113 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
test/e2e/storage/testsuites/volumes.go:198

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","completed":12,"skipped":97,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 97 lines ...
------------------------------
• [SLOW TEST] [14.217 seconds]
[sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
test/e2e/common/node/pods.go:535

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","completed":21,"skipped":184,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [42.486 seconds]
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance]
test/e2e/storage/subpath.go:60

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance]","completed":12,"skipped":134,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [SLOW TEST] [22.212 seconds]
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
test/e2e/common/storage/projected_configmap.go:73

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","completed":9,"skipped":108,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSS
------------------------------
• [SLOW TEST] [69.485 seconds]
[sig-cli] Kubectl client Simple pod should return command exit codes running a successful command
test/e2e/kubectl/kubectl.go:542

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes running a successful command","completed":13,"skipped":99,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [154.306 seconds]
[sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on
test/e2e/storage/csi_mock_volume.go:765

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on","completed":10,"skipped":84,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [92.855 seconds]
[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
test/e2e/network/service.go:2086

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","completed":14,"skipped":103,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
... skipping 70 lines ...
------------------------------
• [SLOW TEST] [27.209 seconds]
[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
test/e2e/storage/testsuites/volumelimits.go:249

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits","completed":13,"skipped":96,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology [BeforeEach]
    test/e2e/storage/framework/testsuite.go:51
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies
      test/e2e/storage/testsuites/topology.go:194

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
      test/e2e/storage/framework/testsuite.go:51
    Jul  8 07:27:55.205: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 33 lines ...
------------------------------
• [SLOW TEST] [88.765 seconds]
[sig-node] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
test/e2e/common/node/container_probe.go:131

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","completed":13,"skipped":78,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
... skipping 70 lines ...
------------------------------
• [SLOW TEST] [74.666 seconds]
[sig-cli] Kubectl client Simple pod should return command exit codes running a failing command
test/e2e/kubectl/kubectl.go:547

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes running a failing command","completed":13,"skipped":118,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
• [SLOW TEST] [68.671 seconds]
[sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:251

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","completed":11,"skipped":127,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 24 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure-file]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology [BeforeEach]
    test/e2e/storage/framework/testsuite.go:51
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies
      test/e2e/storage/testsuites/topology.go:194

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
      test/e2e/storage/framework/testsuite.go:51
    Jul  8 07:28:00.277: INFO: Only supported for providers [azure] (not skeleton)
... skipping 8 lines ...
------------------------------
• [SLOW TEST] [29.223 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:382

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","completed":14,"skipped":129,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 145 lines ...
------------------------------
• [SLOW TEST] [134.097 seconds]
[sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected
test/e2e/storage/ephemeral_volume.go:57

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected","completed":8,"skipped":61,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [81.580 seconds]
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
test/e2e/common/storage/configmap_volume.go:239

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","completed":18,"skipped":127,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
... skipping 45 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology [BeforeEach]
    test/e2e/storage/framework/testsuite.go:51
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies
      test/e2e/storage/testsuites/topology.go:194

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
      test/e2e/storage/framework/testsuite.go:51
    Jul  8 07:28:08.416: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 10 lines ...
------------------------------
• [SLOW TEST] [19.791 seconds]
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
test/e2e/apimachinery/garbage_collector.go:735

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","completed":9,"skipped":61,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
... skipping 89 lines ...
------------------------------
• [SLOW TEST] [48.185 seconds]
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/common/storage/configmap_volume.go:98

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","completed":10,"skipped":114,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [34.155 seconds]
[sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
test/e2e/common/node/security_context.go:369

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","completed":12,"skipped":135,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSS
------------------------------
• [SLOW TEST] [38.303 seconds]
[sig-network] DNS should support configurable pod DNS nameservers [Conformance]
test/e2e/network/dns.go:411

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","completed":14,"skipped":120,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
... skipping 89 lines ...
------------------------------
• [SLOW TEST] [36.279 seconds]
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
test/e2e/common/storage/projected_downwardapi.go:192

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","completed":19,"skipped":138,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [SLOW TEST] [63.325 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
test/e2e/apimachinery/crd_watch.go:51

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","completed":15,"skipped":129,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
... skipping 18 lines ...
------------------------------
• [SLOW TEST] [130.782 seconds]
[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
test/e2e/storage/testsuites/provisioning.go:525

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node","completed":13,"skipped":149,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [116.381 seconds]
[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
test/e2e/common/node/lifecycle_hook.go:130

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","completed":14,"skipped":134,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [67.737 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
test/e2e/storage/testsuites/volumes.go:198

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","completed":11,"skipped":85,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
... skipping 20 lines ...
------------------------------
• [0.530 seconds]
[sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource 
test/e2e/network/proxy.go:86

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource ","completed":12,"skipped":88,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [0.360 seconds]
[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]
test/e2e/auth/service_accounts.go:158

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","completed":13,"skipped":97,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [135.027 seconds]
[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
test/e2e/storage/testsuites/ephemeral.go:196

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume","completed":19,"skipped":127,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 70 lines ...
------------------------------
• [SLOW TEST] [28.983 seconds]
[sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]
test/e2e/apps/rc.go:109

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","completed":13,"skipped":140,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 72 lines ...
------------------------------
• [0.235 seconds]
[sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]
test/e2e/common/node/podtemplates.go:53

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","completed":14,"skipped":158,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [116.715 seconds]
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/common/network/networking.go:105

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","completed":14,"skipped":78,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 218 lines ...
------------------------------
• [0.266 seconds]
[sig-api-machinery] ServerSideApply should create an applied object if it does not already exist
test/e2e/apimachinery/apply.go:71

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] ServerSideApply should create an applied object if it does not already exist","completed":15,"skipped":122,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [210.764 seconds]
[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume
test/e2e/storage/csi_mock_volume.go:517

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","completed":11,"skipped":109,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [0.196 seconds]
[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]
test/e2e/kubectl/kubectl.go:822

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","completed":12,"skipped":118,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [18.128 seconds]
[sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID
test/e2e/common/node/security_context.go:153

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","completed":16,"skipped":130,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [12.149 seconds]
[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/common/storage/projected_downwardapi.go:67

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","completed":16,"skipped":128,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [18.196 seconds]
[sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]
test/e2e/node/security_context.go:111

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","completed":14,"skipped":98,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSSSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 118 lines ...
------------------------------
• [SLOW TEST] [145.405 seconds]
[sig-apps] StatefulSet AvailableReplicas should get updated accordingly when MinReadySeconds is enabled
test/e2e/apps/statefulset.go:1167

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] StatefulSet AvailableReplicas should get updated accordingly when MinReadySeconds is enabled","completed":13,"skipped":123,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
... skipping 43 lines ...
------------------------------
• [SLOW TEST] [40.180 seconds]
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]
test/e2e/storage/subpath.go:92

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]","completed":20,"skipped":144,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 72 lines ...
------------------------------
• [SLOW TEST] [178.008 seconds]
[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File
test/e2e/storage/csi_mock_volume.go:1660

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File","completed":15,"skipped":101,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [117.426 seconds]
[sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on
test/e2e/storage/csi_mock_volume.go:668

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on","completed":13,"skipped":152,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [22.441 seconds]
[sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
test/e2e/apps/deployment.go:113

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","completed":14,"skipped":127,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [SLOW TEST] [149.192 seconds]
[sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on
test/e2e/storage/csi_mock_volume.go:765

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on","completed":20,"skipped":179,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [19.570 seconds]
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
test/e2e/common/node/configmap.go:44

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","completed":21,"skipped":160,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [31.258 seconds]
[sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]
test/e2e/apps/replica_set.go:176

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","completed":17,"skipped":129,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSS
------------------------------
• [1.105 seconds]
[sig-api-machinery] Aggregator should manage the lifecycle of an APIService
test/e2e/apimachinery/aggregator.go:112

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] Aggregator should manage the lifecycle of an APIService","completed":18,"skipped":135,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
... skipping 45 lines ...
------------------------------
• [SLOW TEST] [38.576 seconds]
[sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:257

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","completed":17,"skipped":131,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSSS
------------------------------
• [SLOW TEST] [56.860 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:382

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","completed":14,"skipped":149,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
... skipping 45 lines ...
------------------------------
• [SLOW TEST] [10.339 seconds]
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/common/storage/projected_configmap.go:56

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","completed":15,"skipped":136,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [36.205 seconds]
[sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running 
test/e2e/node/events.go:41

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running ","completed":15,"skipped":123,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [80.627 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:232

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","completed":11,"skipped":114,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.046 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
... skipping 29 lines ...
------------------------------
• [SLOW TEST] [14.147 seconds]
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
test/e2e/common/storage/configmap_volume.go:88

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","completed":21,"skipped":180,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [8.108 seconds]
[sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
test/e2e/common/node/security_context.go:101

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","completed":16,"skipped":129,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
• [SLOW TEST] [12.215 seconds]
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
test/e2e/common/storage/secrets_volume.go:124

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","completed":18,"skipped":139,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [12.330 seconds]
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
test/e2e/common/storage/configmap_volume.go:73

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","completed":15,"skipped":169,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 93 lines ...
------------------------------
• [SLOW TEST] [67.280 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:232

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","completed":20,"skipped":137,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 97 lines ...
------------------------------
• [SLOW TEST] [130.201 seconds]
[sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data
test/e2e/storage/testsuites/volumes.go:161

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data","completed":14,"skipped":86,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
... skipping 18 lines ...
------------------------------
• [SLOW TEST] [12.297 seconds]
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
test/e2e/common/storage/configmap_volume.go:422

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","completed":22,"skipped":188,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 45 lines ...
------------------------------
• [0.131 seconds]
[sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s
test/e2e/apimachinery/request_timeout.go:70

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s","completed":23,"skipped":195,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [66.294 seconds]
[sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:232

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","completed":15,"skipped":159,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [35.844 seconds]
[sig-cli] Kubectl client Simple pod should contain last line of the log
test/e2e/kubectl/kubectl.go:651

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Simple pod should contain last line of the log","completed":14,"skipped":157,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 43 lines ...
  Only supported for providers [aws] (not skeleton)
  In [BeforeEach] at: test/e2e/storage/drivers/in_tree.go:1722
------------------------------
SS
------------------------------
• [SLOW TEST] [82.162 seconds]
[sig-apps] CronJob should delete failed finished jobs with limit of one job
test/e2e/apps/cronjob.go:291

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] CronJob should delete failed finished jobs with limit of one job","completed":15,"skipped":140,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 118 lines ...
------------------------------
• [SLOW TEST] [18.379 seconds]
[sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]
test/e2e/network/kube_proxy.go:54

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]","completed":21,"skipped":150,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 93 lines ...
------------------------------
• [0.247 seconds]
[sig-network] Services should delete a collection of services [Conformance]
test/e2e/network/service.go:3554

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Services should delete a collection of services [Conformance]","completed":22,"skipped":167,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
• [SLOW TEST] [47.397 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
test/e2e/storage/testsuites/volumemode.go:354

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","completed":16,"skipped":107,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [21.009 seconds]
[sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls
test/e2e/apimachinery/chunking.go:79

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls","completed":15,"skipped":87,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.004 seconds]
[sig-storage] In-tree Volumes
... skipping 122 lines ...
------------------------------
• [SLOW TEST] [38.386 seconds]
[sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately
test/e2e/storage/pvc_protection.go:128

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","completed":12,"skipped":115,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] CSI Volumes
... skipping 72 lines ...
------------------------------
• [SLOW TEST] [28.147 seconds]
[sig-apps] Job should run a job to completion when tasks succeed
test/e2e/apps/job.go:81

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","completed":16,"skipped":159,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 68 lines ...
------------------------------
• [SLOW TEST] [20.932 seconds]
[sig-network] SCTP [LinuxOnly] should create a Pod with SCTP HostPort
test/e2e/network/service.go:4124

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] SCTP [LinuxOnly] should create a Pod with SCTP HostPort","completed":15,"skipped":165,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
... skipping 141 lines ...
------------------------------
• [SLOW TEST] [14.170 seconds]
[sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]
test/e2e/common/node/security_context.go:131

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","completed":23,"skipped":170,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [SLOW TEST] [49.119 seconds]
[sig-network] Networking Granular Checks: Services should function for pod-Service: http
test/e2e/network/networking.go:147

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for pod-Service: http","completed":16,"skipped":140,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [35.149 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:447

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","completed":24,"skipped":195,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
... skipping 45 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Dynamic PV (immediate binding)] topology [BeforeEach]
    test/e2e/storage/framework/testsuite.go:51
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies
      test/e2e/storage/testsuites/topology.go:194

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
      test/e2e/storage/framework/testsuite.go:51
    Jul  8 07:30:44.873: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 10 lines ...
------------------------------
• [1.220 seconds]
[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]
test/e2e/auth/certificates.go:200

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","completed":25,"skipped":202,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [16.650 seconds]
[sig-network] Services should release NodePorts on delete
test/e2e/network/service.go:1594

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Services should release NodePorts on delete","completed":16,"skipped":115,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure-file]
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [14.281 seconds]
[sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root
test/e2e/common/storage/empty_dir.go:59

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","completed":13,"skipped":138,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [1.061 seconds]
[sig-cli] Kubectl client Kubectl validation should detect unknown metadata fields of a typed object
test/e2e/kubectl/kubectl.go:1186

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl validation should detect unknown metadata fields of a typed object","completed":17,"skipped":118,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
... skipping 68 lines ...
------------------------------
• [SLOW TEST] [308.005 seconds]
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity
test/e2e/apps/statefulset.go:132

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity","completed":15,"skipped":114,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 108 lines ...
------------------------------
• [SLOW TEST] [53.174 seconds]
[sig-network] Networking Granular Checks: Services should function for endpoint-Service: http
test/e2e/network/networking.go:236

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for endpoint-Service: http","completed":17,"skipped":132,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.039 seconds]
[sig-storage] CSI Volumes
... skipping 108 lines ...
------------------------------
• [SLOW TEST] [33.688 seconds]
[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
test/e2e/apimachinery/aggregator.go:107

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","completed":17,"skipped":108,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 20 lines ...
------------------------------
• [0.251 seconds]
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
test/e2e/apimachinery/watch.go:142

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","completed":18,"skipped":110,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
• [SLOW TEST] [201.048 seconds]
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails
test/e2e/apps/statefulset.go:256

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails","completed":20,"skipped":135,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSSSSSS
------------------------------
• [SLOW TEST] [16.306 seconds]
[sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets
test/e2e/kubectl/portforward.go:470

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets","completed":16,"skipped":129,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [SLOW TEST] [16.650 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
test/e2e/storage/testsuites/subpath.go:196

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","completed":18,"skipped":148,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
• [SLOW TEST] [27.888 seconds]
[sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]
test/e2e/kubectl/kubectl.go:1736

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","completed":18,"skipped":124,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [12.623 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
test/e2e/storage/testsuites/subpath.go:196

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","completed":19,"skipped":150,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
... skipping 18 lines ...
------------------------------
• [SLOW TEST] [16.225 seconds]
[sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
test/e2e/common/storage/empty_dir.go:63

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","completed":17,"skipped":145,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
• [SLOW TEST] [167.118 seconds]
[sig-network] Services should implement service.kubernetes.io/headless
test/e2e/network/service.go:2207

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","completed":15,"skipped":125,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [11.079 seconds]
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
test/e2e/apimachinery/resource_quota.go:382

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","completed":19,"skipped":134,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 45 lines ...
------------------------------
• [SLOW TEST] [50.146 seconds]
[sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1
test/e2e/storage/persistent_volumes-local.go:240

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","completed":16,"skipped":185,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [51.674 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:447

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","completed":24,"skipped":185,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [10.260 seconds]
[sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance]
test/e2e/network/proxy.go:380

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance]","completed":18,"skipped":147,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 45 lines ...
------------------------------
• [SLOW TEST] [60.843 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
test/e2e/storage/testsuites/volumes.go:161

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","completed":17,"skipped":169,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 342 lines ...
------------------------------
• [SLOW TEST] [251.854 seconds]
[sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
test/e2e/common/node/container_probe.go:211

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","completed":18,"skipped":118,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [13.047 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
test/e2e/apimachinery/crd_publish_openapi.go:276

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","completed":17,"skipped":185,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSSSSS
------------------------------
S [SKIPPED] [0.003 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [14.354 seconds]
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/common/storage/empty_dir.go:146

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","completed":25,"skipped":190,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
• [SLOW TEST] [8.892 seconds]
[sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1
test/e2e/storage/persistent_volumes-local.go:240

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","completed":18,"skipped":200,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 72 lines ...
------------------------------
• [SLOW TEST] [27.114 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
test/e2e/storage/testsuites/volumes.go:161

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","completed":16,"skipped":125,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [27.511 seconds]
[sig-apps] ReplicaSet Replace and Patch tests [Conformance]
test/e2e/apps/replica_set.go:154

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","completed":20,"skipped":139,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [16.399 seconds]
[sig-apps] DisruptionController evictions: enough pods, absolute => should allow an eviction
test/e2e/apps/disruption.go:289

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, absolute =\u003e should allow an eviction","completed":18,"skipped":209,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [53.338 seconds]
[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols
test/e2e/network/service.go:1244

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","completed":21,"skipped":146,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [12.116 seconds]
[sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/common/node/security_context.go:271

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","completed":26,"skipped":192,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSSSSS
------------------------------
S [SKIPPED] [0.032 seconds]
[sig-apps] ReplicationController
... skipping 54 lines ...
------------------------------
• [SLOW TEST] [8.440 seconds]
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
test/e2e/common/storage/secrets_volume.go:98

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","completed":19,"skipped":215,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [SLOW TEST] [74.262 seconds]
[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]
test/e2e/apps/cronjob.go:160

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","completed":26,"skipped":202,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
... skipping 18 lines ...
------------------------------
• [SLOW TEST] [25.933 seconds]
[sig-network] Services should be able to create a functioning NodePort service [Conformance]
test/e2e/network/service.go:1181

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","completed":19,"skipped":156,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [12.202 seconds]
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/common/storage/empty_dir.go:196

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","completed":19,"skipped":213,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [12.425 seconds]
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
test/e2e/common/storage/configmap_volume.go:108

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","completed":22,"skipped":148,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 93 lines ...
------------------------------
• [0.295 seconds]
[sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance]
test/e2e/common/node/runtimeclass.go:104

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance]","completed":23,"skipped":163,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 214 lines ...
------------------------------
• [SLOW TEST] [68.243 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
test/e2e/storage/testsuites/volumes.go:161

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","completed":19,"skipped":113,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [SLOW TEST] [52.237 seconds]
[sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:257

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","completed":20,"skipped":151,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 68 lines ...
------------------------------
• [0.271 seconds]
[sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.
test/e2e/instrumentation/monitoring/metrics_grabber.go:76

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.","completed":21,"skipped":164,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [4.182 seconds]
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/common/storage/empty_dir.go:186

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","completed":22,"skipped":181,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [0.167 seconds]
[sig-api-machinery] ResourceQuota should manage the lifecycle of a ResourceQuota
test/e2e/apimachinery/resource_quota.go:922

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should manage the lifecycle of a ResourceQuota","completed":23,"skipped":181,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [14.322 seconds]
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/common/storage/empty_dir.go:216

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","completed":24,"skipped":190,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [14.669 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
test/e2e/apimachinery/crd_publish_openapi.go:357

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","completed":20,"skipped":121,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 133 lines ...
------------------------------
• [SLOW TEST] [39.659 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:232

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","completed":17,"skipped":126,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [11.285 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
test/e2e/apimachinery/crd_publish_openapi.go:69

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","completed":21,"skipped":141,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
• [SLOW TEST] [40.936 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:232

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","completed":21,"skipped":146,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [18.109 seconds]
[sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted
test/e2e/apps/job.go:271

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted","completed":24,"skipped":181,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure-file]
... skipping 43 lines ...
------------------------------
• [SLOW TEST] [6.160 seconds]
[sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
test/e2e/common/node/kubelet.go:51

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","completed":22,"skipped":143,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSS
------------------------------
• [SLOW TEST] [14.556 seconds]
[sig-cli] Kubectl client Simple pod should support exec through kubectl proxy
test/e2e/kubectl/kubectl.go:501

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy","completed":25,"skipped":184,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 125 lines ...
------------------------------
• [SLOW TEST] [22.303 seconds]
[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
test/e2e/network/service.go:2150

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","completed":18,"skipped":129,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [13.893 seconds]
[sig-cli] Kubectl client Kubectl validation should create/apply a valid CR for CRD with validation schema
test/e2e/kubectl/kubectl.go:1052

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl validation should create/apply a valid CR for CRD with validation schema","completed":23,"skipped":150,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
... skipping 93 lines ...
------------------------------
• [SLOW TEST] [312.262 seconds]
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs
test/e2e/apps/statefulset.go:292

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs","completed":14,"skipped":103,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [10.282 seconds]
[sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]
test/e2e/common/node/pods.go:225

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","completed":19,"skipped":130,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [0.232 seconds]
[sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
test/e2e/common/node/configmap.go:137

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","completed":20,"skipped":130,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
• [SLOW TEST] [142.948 seconds]
[sig-network] Services should be able to up and down services
test/e2e/network/service.go:1045

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Services should be able to up and down services","completed":17,"skipped":140,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [0.473 seconds]
[sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request
test/e2e/apimachinery/apply.go:273

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request","completed":21,"skipped":132,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 45 lines ...
------------------------------
• [SLOW TEST] [10.541 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:382

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","completed":15,"skipped":105,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [0.200 seconds]
[sig-network] Proxy version v1 should proxy logs on node using proxy subresource 
test/e2e/network/proxy.go:92

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource ","completed":16,"skipped":106,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 168 lines ...
------------------------------
• [SLOW TEST] [74.342 seconds]
[sig-apps] DisruptionController should observe that the PodDisruptionBudget status is not updated for unmanaged pods
test/e2e/apps/disruption.go:194

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] DisruptionController should observe that the PodDisruptionBudget status is not updated for unmanaged pods","completed":20,"skipped":156,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [10.204 seconds]
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
test/e2e/apimachinery/watch.go:257

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","completed":17,"skipped":124,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
• [SLOW TEST] [29.878 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
test/e2e/storage/testsuites/subpath.go:196

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","completed":24,"skipped":163,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [SLOW TEST] [10.857 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
test/e2e/storage/testsuites/subpath.go:382

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","completed":21,"skipped":165,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 45 lines ...
------------------------------
• [SLOW TEST] [8.312 seconds]
[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
test/e2e/common/node/lifecycle_hook.go:152

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","completed":18,"skipped":127,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [SLOW TEST] [8.067 seconds]
[sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
test/e2e/common/node/init_container.go:176

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","completed":25,"skipped":163,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [SLOW TEST] [9.828 seconds]
[sig-network] Services should allow pods to hairpin back to themselves through services
test/e2e/network/service.go:1016

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","completed":22,"skipped":178,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [69.986 seconds]
[sig-network] Networking Granular Checks: Services should function for service endpoints using hostNetwork
test/e2e/network/networking.go:474

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for service endpoints using hostNetwork","completed":22,"skipped":146,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [20.404 seconds]
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]
test/e2e/apps/statefulset.go:975

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","completed":26,"skipped":174,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
• [0.168 seconds]
[sig-network] NetworkPolicy API should support creating NetworkPolicy API operations
test/e2e/network/netpol/network_legacy.go:2201

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","completed":27,"skipped":175,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 70 lines ...
------------------------------
• [4.612 seconds]
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
test/e2e/common/storage/downwardapi_volume.go:129

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","completed":28,"skipped":201,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [SLOW TEST] [8.860 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
test/e2e/storage/testsuites/volumes.go:198

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","completed":29,"skipped":231,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSSSSSSSSS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [SLOW TEST] [120.238 seconds]
[sig-apps] CronJob should delete successful finished jobs with limit of one successful job
test/e2e/apps/cronjob.go:280

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-apps] CronJob should delete successful finished jobs with limit of one successful job","completed":20,"skipped":221,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
... skipping 18 lines ...
------------------------------
• [1.540 seconds]
[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]
test/e2e/kubectl/kubectl.go:929

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","completed":21,"skipped":222,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
• [4.215 seconds]
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
test/e2e/common/storage/empty_dir.go:136

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","completed":30,"skipped":250,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SS
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] CSI Volumes
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [255.870 seconds]
[sig-node] Probing container should *not* be restarted with a GRPC liveness probe [NodeConformance]
test/e2e/common/node/container_probe.go:520

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a GRPC liveness probe [NodeConformance]","completed":19,"skipped":139,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
SSSS
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 20 lines ...
------------------------------
• [SLOW TEST] [358.827 seconds]
[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused
test/e2e/storage/csi_mock_volume.go:1377

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused","completed":10,"skipped":78,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
... skipping 22 lines ...
------------------------------
• [SLOW TEST] [12.153 seconds]
[sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects
test/e2e/kubectl/portforward.go:465

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects","completed":31,"skipped":253,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S
------------------------------
S [SKIPPED] [0.001 seconds]
[sig-storage] In-tree Volumes
... skipping 47 lines ...
------------------------------
• [SLOW TEST] [40.412 seconds]
[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
test/e2e/storage/testsuites/volumes.go:161

  Begin Captured StdOut/StdErr Output >>
    {"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","completed":23,"skipped":154,"failed":0}
  << End Captured StdOut/StdErr Output
------------------------------
S [SKIPPED] [0.000 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
... skipping 113 lines ...
      test/e2e/e2e_test.go:142
  << End Captured GinkgoWriter Output

  Driver local doesn't support DynamicPV -- skipping
  In [BeforeEach] at: test/e2e/storage/framework/testsuite.go:116
------------------------------
S{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:169","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2022-07-08T07:34:33Z"}
wrapper.sh] [EARLY EXIT] Interrupted, entering handler ...
wrapper.sh] [CLEANUP] Cleaning up after Docker in Docker ...
+ signal_handler
+ [ -n 81989 ]
+ kill -TERM 81989
+ cleanup
+ [ false = true ]
+ [ true = true ]
+ kind export logs /logs/artifacts
Exporting logs for cluster "kind" to:
/logs/artifacts

------------------------------
• [FAILED] [67.242 seconds]
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral [AfterEach]
    test/e2e/framework/framework.go:187
      should create read-only inline ephemeral volume
      test/e2e/storage/testsuites/ephemeral.go:175

  Begin Captured StdOut/StdErr Output >>
    {"msg":"FAILED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume","completed":18,"skipped":128,"failed":1,"failures":["[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume"]}
  << End Captured StdOut/StdErr Output

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
      test/e2e/storage/framework/testsuite.go:51
    [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
... skipping 82 lines ...
    Jul  8 07:34:24.992: INFO: Wait up to 5m0s for pod PV pvc-a7db5c5e-3f07-4f4a-a82a-1fb8b9ab5ae3 to be fully deleted
    Jul  8 07:34:24.992: INFO: Waiting up to 5m0s for PersistentVolume pvc-a7db5c5e-3f07-4f4a-a82a-1fb8b9ab5ae3 to get deleted
    Jul  8 07:34:24.995: INFO: PersistentVolume pvc-a7db5c5e-3f07-4f4a-a82a-1fb8b9ab5ae3 was removed
    STEP: Deleting sc 07/08/22 07:34:24.998
    STEP: deleting the test namespace: ephemeral-7008 07/08/22 07:34:25.001
    STEP: Waiting for namespaces [ephemeral-7008] to vanish 07/08/22 07:34:25.038
    Jul  8 07:34:35.044: INFO: error deleting namespace ephemeral-7008: Get "https://127.0.0.1:46737/api/v1/namespaces": dial tcp 127.0.0.1:46737: connect: connection refused
    STEP: uninstalling csi csi-hostpath driver 07/08/22 07:34:35.044
    Jul  8 07:34:35.044: INFO: deleting *v1.ServiceAccount: ephemeral-7008-6202/csi-attacher
    Jul  8 07:34:35.044: INFO: deleting failed: Delete "https://127.0.0.1:46737/api/v1/namespaces/ephemeral-7008-6202/serviceaccounts/csi-attacher": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.044: INFO: deleting *v1.ClusterRole: external-attacher-runner-ephemeral-7008
    Jul  8 07:34:35.044: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterroles/external-attacher-runner-ephemeral-7008": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.044: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-ephemeral-7008
    Jul  8 07:34:35.044: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-attacher-role-ephemeral-7008": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.044: INFO: deleting *v1.Role: ephemeral-7008-6202/external-attacher-cfg-ephemeral-7008
    Jul  8 07:34:35.045: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-7008-6202/roles/external-attacher-cfg-ephemeral-7008": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.045: INFO: deleting *v1.RoleBinding: ephemeral-7008-6202/csi-attacher-role-cfg
    Jul  8 07:34:35.045: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-7008-6202/rolebindings/csi-attacher-role-cfg": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.045: INFO: deleting *v1.ServiceAccount: ephemeral-7008-6202/csi-provisioner
    Jul  8 07:34:35.045: INFO: deleting failed: Delete "https://127.0.0.1:46737/api/v1/namespaces/ephemeral-7008-6202/serviceaccounts/csi-provisioner": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.045: INFO: deleting *v1.ClusterRole: external-provisioner-runner-ephemeral-7008
    Jul  8 07:34:35.045: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterroles/external-provisioner-runner-ephemeral-7008": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.045: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-ephemeral-7008
    Jul  8 07:34:35.045: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-provisioner-role-ephemeral-7008": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.045: INFO: deleting *v1.Role: ephemeral-7008-6202/external-provisioner-cfg-ephemeral-7008
    Jul  8 07:34:35.045: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-7008-6202/roles/external-provisioner-cfg-ephemeral-7008": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.045: INFO: deleting *v1.RoleBinding: ephemeral-7008-6202/csi-provisioner-role-cfg
    Jul  8 07:34:35.046: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-7008-6202/rolebindings/csi-provisioner-role-cfg": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.046: INFO: deleting *v1.ServiceAccount: ephemeral-7008-6202/csi-snapshotter
    Jul  8 07:34:35.046: INFO: deleting failed: Delete "https://127.0.0.1:46737/api/v1/namespaces/ephemeral-7008-6202/serviceaccounts/csi-snapshotter": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.046: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-ephemeral-7008
    Jul  8 07:34:35.046: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterroles/external-snapshotter-runner-ephemeral-7008": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.046: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-ephemeral-7008
    Jul  8 07:34:35.046: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshotter-role-ephemeral-7008": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.046: INFO: deleting *v1.Role: ephemeral-7008-6202/external-snapshotter-leaderelection-ephemeral-7008
    Jul  8 07:34:35.046: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-7008-6202/roles/external-snapshotter-leaderelection-ephemeral-7008": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.046: INFO: deleting *v1.RoleBinding: ephemeral-7008-6202/external-snapshotter-leaderelection
    Jul  8 07:34:35.046: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-7008-6202/rolebindings/external-snapshotter-leaderelection": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.046: INFO: deleting *v1.ServiceAccount: ephemeral-7008-6202/csi-external-health-monitor-controller
    Jul  8 07:34:35.046: INFO: deleting failed: Delete "https://127.0.0.1:46737/api/v1/namespaces/ephemeral-7008-6202/serviceaccounts/csi-external-health-monitor-controller": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.046: INFO: deleting *v1.ClusterRole: external-health-monitor-controller-runner-ephemeral-7008
    Jul  8 07:34:35.047: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterroles/external-health-monitor-controller-runner-ephemeral-7008": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.047: INFO: deleting *v1.ClusterRoleBinding: csi-external-health-monitor-controller-role-ephemeral-7008
    Jul  8 07:34:35.047: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-external-health-monitor-controller-role-ephemeral-7008": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.047: INFO: deleting *v1.Role: ephemeral-7008-6202/external-health-monitor-controller-cfg-ephemeral-7008
    Jul  8 07:34:35.059: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-7008-6202/roles/external-health-monitor-controller-cfg-ephemeral-7008": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.059: INFO: deleting *v1.RoleBinding: ephemeral-7008-6202/csi-external-health-monitor-controller-role-cfg
    Jul  8 07:34:35.059: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-7008-6202/rolebindings/csi-external-health-monitor-controller-role-cfg": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.059: INFO: deleting *v1.ServiceAccount: ephemeral-7008-6202/csi-resizer
    Jul  8 07:34:35.060: INFO: deleting failed: Delete "https://127.0.0.1:46737/api/v1/namespaces/ephemeral-7008-6202/serviceaccounts/csi-resizer": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.060: INFO: deleting *v1.ClusterRole: external-resizer-runner-ephemeral-7008
    Jul  8 07:34:35.060: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterroles/external-resizer-runner-ephemeral-7008": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.060: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-ephemeral-7008
    Jul  8 07:34:35.060: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-resizer-role-ephemeral-7008": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.060: INFO: deleting *v1.Role: ephemeral-7008-6202/external-resizer-cfg-ephemeral-7008
    Jul  8 07:34:35.060: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-7008-6202/roles/external-resizer-cfg-ephemeral-7008": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.060: INFO: deleting *v1.RoleBinding: ephemeral-7008-6202/csi-resizer-role-cfg
    Jul  8 07:34:35.060: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-7008-6202/rolebindings/csi-resizer-role-cfg": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.060: INFO: deleting *v1.CSIDriver: csi-hostpath-ephemeral-7008
    Jul  8 07:34:35.060: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/storage.k8s.io/v1/csidrivers/csi-hostpath-ephemeral-7008": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.060: INFO: deleting *v1.ServiceAccount: ephemeral-7008-6202/csi-hostpathplugin-sa
    Jul  8 07:34:35.060: INFO: deleting failed: Delete "https://127.0.0.1:46737/api/v1/namespaces/ephemeral-7008-6202/serviceaccounts/csi-hostpathplugin-sa": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.060: INFO: deleting *v1.ClusterRoleBinding: csi-hostpathplugin-attacher-cluster-role-ephemeral-7008
    Jul  8 07:34:35.060: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-hostpathplugin-attacher-cluster-role-ephemeral-7008": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.060: INFO: deleting *v1.ClusterRoleBinding: csi-hostpathplugin-health-monitor-controller-cluster-role-ephemeral-7008
    Jul  8 07:34:35.061: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-hostpathplugin-health-monitor-controller-cluster-role-ephemeral-7008": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.061: INFO: deleting *v1.ClusterRoleBinding: csi-hostpathplugin-provisioner-cluster-role-ephemeral-7008
    Jul  8 07:34:35.061: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-hostpathplugin-provisioner-cluster-role-ephemeral-7008": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.061: INFO: deleting *v1.ClusterRoleBinding: csi-hostpathplugin-resizer-cluster-role-ephemeral-7008
    Jul  8 07:34:35.061: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-hostpathplugin-resizer-cluster-role-ephemeral-7008": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.061: INFO: deleting *v1.ClusterRoleBinding: csi-hostpathplugin-snapshotter-cluster-role-ephemeral-7008
    Jul  8 07:34:35.061: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-hostpathplugin-snapshotter-cluster-role-ephemeral-7008": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.061: INFO: deleting *v1.RoleBinding: ephemeral-7008-6202/csi-hostpathplugin-attacher-role
    Jul  8 07:34:35.061: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-7008-6202/rolebindings/csi-hostpathplugin-attacher-role": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.061: INFO: deleting *v1.RoleBinding: ephemeral-7008-6202/csi-hostpathplugin-health-monitor-controller-role
    Jul  8 07:34:35.061: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-7008-6202/rolebindings/csi-hostpathplugin-health-monitor-controller-role": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.061: INFO: deleting *v1.RoleBinding: ephemeral-7008-6202/csi-hostpathplugin-provisioner-role
    Jul  8 07:34:35.061: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-7008-6202/rolebindings/csi-hostpathplugin-provisioner-role": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.061: INFO: deleting *v1.RoleBinding: ephemeral-7008-6202/csi-hostpathplugin-resizer-role
    Jul  8 07:34:35.061: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-7008-6202/rolebindings/csi-hostpathplugin-resizer-role": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.061: INFO: deleting *v1.RoleBinding: ephemeral-7008-6202/csi-hostpathplugin-snapshotter-role
    Jul  8 07:34:35.062: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-7008-6202/rolebindings/csi-hostpathplugin-snapshotter-role": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.062: INFO: deleting *v1.StatefulSet: ephemeral-7008-6202/csi-hostpathplugin
    Jul  8 07:34:35.062: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/apps/v1/namespaces/ephemeral-7008-6202/statefulsets/csi-hostpathplugin": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.062: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-ephemeral-7008
    Jul  8 07:34:35.062: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/psp-csi-hostpath-role-ephemeral-7008": dial tcp 127.0.0.1:46737: connect: connection refused
    STEP: deleting the driver namespace: ephemeral-7008-6202 07/08/22 07:34:35.062
    Jul  8 07:34:35.062: INFO: error deleting namespace ephemeral-7008-6202: Delete "https://127.0.0.1:46737/api/v1/namespaces/ephemeral-7008-6202": dial tcp 127.0.0.1:46737: connect: connection refused
    [AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
      test/e2e/framework/framework.go:187
    Jul  8 07:34:35.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    Jul  8 07:34:35.062: FAIL: All nodes should be ready after test, Get "https://127.0.0.1:46737/api/v1/nodes": dial tcp 127.0.0.1:46737: connect: connection refused

    Full Stack Trace

    STEP: Destroying namespace "ephemeral-7008" for this suite. 07/08/22 07:34:35.063
    STEP: Collecting events from namespace "ephemeral-7008". 07/08/22 07:34:35.063
    Jul  8 07:34:35.084: INFO: Unexpected error: failed to list events in namespace "ephemeral-7008": 
        <*url.Error | 0xc003942db0>: {
            Op: "Get",
            URL: "https://127.0.0.1:46737/api/v1/namespaces/ephemeral-7008/events",
            Err: <*net.OpError | 0xc002f4f400>{
                Op: "dial",
                Net: "tcp",
                Source: nil,
... skipping 5 lines ...
                Err: <*os.SyscallError | 0xc0038b8ea0>{
                    Syscall: "connect",
                    Err: <syscall.Errno>0x6f,
                },
            },
        }
    Jul  8 07:34:35.084: FAIL: failed to list events in namespace "ephemeral-7008": Get "https://127.0.0.1:46737/api/v1/namespaces/ephemeral-7008/events": dial tcp 127.0.0.1:46737: connect: connection refused

    Full Stack Trace
    k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc003248830, {0xc0037ad9b0, 0xe})
    	test/e2e/framework/util.go:909 +0x191
    k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7b33508, 0xc003279800}, {0xc0037ad9b0, 0xe})
    	test/e2e/framework/util.go:927 +0x8d
    k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
    	test/e2e/framework/framework.go:402 +0x77d
    panic({0x6d6dac0, 0xc003531400})
    	/usr/local/go/src/runtime/panic.go:838 +0x207
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()
    	test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d
    panic({0x6d6fc00, 0xc002f04930})
    	/usr/local/go/src/runtime/panic.go:838 +0x207
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00176b860, 0x9c}, {0xc0029af788?, 0x721e46e?, 0xc0029af7b0?})
    	test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197
    k8s.io/kubernetes/test/e2e/framework.Failf({0x72ee744?, 0xc003279800?}, {0xc0029afa78?, 0x722c850?, 0x9?})
    	test/e2e/framework/log.go:51 +0x12c
    k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000a63760)
    	test/e2e/framework/framework.go:483 +0x745
    [ReportAfterEach] TOP-LEVEL
... skipping 75 lines ...

  Driver local doesn't support GenericEphemeralVolume -- skipping
  In [BeforeEach] at: test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
• [FAILED] [9.140 seconds]
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  test/e2e/storage/in_tree_volumes.go:63
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/framework/testsuite.go:50
      [It] should be able to unmount after the subpath directory is deleted [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:447

  Begin Captured StdOut/StdErr Output >>
    {"msg":"FAILED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","completed":23,"skipped":167,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]"]}
  << End Captured StdOut/StdErr Output

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
      test/e2e/storage/framework/testsuite.go:51
    [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 16 lines ...
    Jul  8 07:34:30.158: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://127.0.0.1:46737 --kubeconfig=/root/.kube/kind-test-config --namespace=provisioning-4355 exec pod-subpath-test-inlinevolume-2cs5 --container test-container-volume-inlinevolume-2cs5 -- /bin/sh -c rm -r /test-volume/provisioning-4355'
    Jul  8 07:34:31.095: INFO: stderr: ""
    Jul  8 07:34:31.095: INFO: stdout: ""
    STEP: Deleting pod pod-subpath-test-inlinevolume-2cs5 07/08/22 07:34:31.095
    Jul  8 07:34:31.095: INFO: Deleting pod "pod-subpath-test-inlinevolume-2cs5" in namespace "provisioning-4355"
    Jul  8 07:34:31.117: INFO: Wait up to 5m0s for pod "pod-subpath-test-inlinevolume-2cs5" to be fully deleted
    Jul  8 07:34:35.139: INFO: Encountered non-retryable error while getting pod provisioning-4355/pod-subpath-test-inlinevolume-2cs5: Get "https://127.0.0.1:46737/api/v1/namespaces/provisioning-4355/pods/pod-subpath-test-inlinevolume-2cs5": dial tcp 127.0.0.1:46737: connect: connection refused
    STEP: Deleting pod 07/08/22 07:34:35.139
    Jul  8 07:34:35.139: INFO: Deleting pod "pod-subpath-test-inlinevolume-2cs5" in namespace "provisioning-4355"
    Jul  8 07:34:35.140: INFO: Unexpected error: while cleaning up resource: 
        <errors.aggregate | len:1, cap:1>: [
            <*errors.errorString | 0xc000538520>{
                s: "pod Delete API error: Delete \"https://127.0.0.1:46737/api/v1/namespaces/provisioning-4355/pods/pod-subpath-test-inlinevolume-2cs5\": dial tcp 127.0.0.1:46737: connect: connection refused",
            },
        ]
    Jul  8 07:34:35.140: FAIL: while cleaning up resource: pod Delete API error: Delete "https://127.0.0.1:46737/api/v1/namespaces/provisioning-4355/pods/pod-subpath-test-inlinevolume-2cs5": dial tcp 127.0.0.1:46737: connect: connection refused

    Full Stack Trace
    k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func2()
    	test/e2e/storage/testsuites/subpath.go:185 +0x326
    k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func20()
    	test/e2e/storage/testsuites/subpath.go:475 +0x4f7
    [AfterEach] [Testpattern: Inline-volume (default fs)] subPath
      test/e2e/framework/framework.go:187
    STEP: Collecting events from namespace "provisioning-4355". 07/08/22 07:34:35.14
    Jul  8 07:34:35.140: INFO: Unexpected error: failed to list events in namespace "provisioning-4355": 
        <*url.Error | 0xc003a33bf0>: {
            Op: "Get",
            URL: "https://127.0.0.1:46737/api/v1/namespaces/provisioning-4355/events",
            Err: <*net.OpError | 0xc0030cb400>{
                Op: "dial",
                Net: "tcp",
                Source: nil,
... skipping 5 lines ...
                Err: <*os.SyscallError | 0xc002769200>{
                    Syscall: "connect",
                    Err: <syscall.Errno>0x6f,
                },
            },
        }
    Jul  8 07:34:35.140: FAIL: failed to list events in namespace "provisioning-4355": Get "https://127.0.0.1:46737/api/v1/namespaces/provisioning-4355/events": dial tcp 127.0.0.1:46737: connect: connection refused

    Full Stack Trace
    k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc0035dd770, {0xc0034c90f8, 0x11})
    	test/e2e/framework/util.go:909 +0x191
    k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7b33508, 0xc0001adb00}, {0xc0034c90f8, 0x11})
    	test/e2e/framework/util.go:927 +0x8d
    k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000668dc0, 0x3?)
    	test/e2e/framework/framework.go:181 +0x8b
    k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000668dc0)
    	test/e2e/framework/framework.go:435 +0x1e2
    STEP: Destroying namespace "provisioning-4355" for this suite. 07/08/22 07:34:35.141
    Jul  8 07:34:35.141: FAIL: Couldn't delete ns: "provisioning-4355": Delete "https://127.0.0.1:46737/api/v1/namespaces/provisioning-4355": dial tcp 127.0.0.1:46737: connect: connection refused (&url.Error{Op:"Delete", URL:"https://127.0.0.1:46737/api/v1/namespaces/provisioning-4355", Err:(*net.OpError)(0xc0030cb590)})

    Full Stack Trace
    panic({0x6d6dac0, 0xc002f4e240})
    	/usr/local/go/src/runtime/panic.go:838 +0x207
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()
    	test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d
    panic({0x6d6fc00, 0xc000659b90})
    	/usr/local/go/src/runtime/panic.go:838 +0x207
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc000e06ea0, 0xcc}, {0xc0035dd228?, 0x721e46e?, 0xc0035dd248?})
    	test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197
    k8s.io/kubernetes/test/e2e/framework.Fail({0xc002c67c80, 0xb7}, {0xc0035dd2c0?, 0xc0013f6a80?, 0xc0035dd2e8?})
    	test/e2e/framework/log.go:63 +0x145
    k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7ad92e0, 0xc003a33bf0}, {0xc002769240?, 0x0?, 0x0?})
    	test/e2e/framework/expect.go:76 +0x267
    k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
    	test/e2e/framework/expect.go:43
    k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc0035dd770, {0xc0034c90f8, 0x11})
... skipping 5 lines ...
    k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000668dc0)
    	test/e2e/framework/framework.go:435 +0x1e2
    [ReportAfterEach] TOP-LEVEL
      test/e2e/e2e_test.go:142
  << End Captured GinkgoWriter Output

  Jul  8 07:34:35.140: while cleaning up resource: pod Delete API error: Delete "https://127.0.0.1:46737/api/v1/namespaces/provisioning-4355/pods/pod-subpath-test-inlinevolume-2cs5": dial tcp 127.0.0.1:46737: connect: connection refused
  In [It] at: test/e2e/storage/testsuites/subpath.go:185
------------------------------
SSSSS
------------------------------
• [FAILED] [89.827 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
  [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/common/node/container_probe.go:148

  Begin Captured StdOut/StdErr Output >>
    {"msg":"FAILED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","completed":21,"skipped":139,"failed":1,"failures":["[sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]"]}
  << End Captured StdOut/StdErr Output

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [sig-node] Probing container
      test/e2e/framework/framework.go:186
    STEP: Creating a kubernetes client 07/08/22 07:33:05.331
... skipping 14 lines ...
    Jul  8 07:33:13.477: INFO: Pod "busybox-add8b797-c58e-440b-b940-bbc7b12efc58": Phase="Pending", Reason="", readiness=false. Elapsed: 8.024459516s
    Jul  8 07:33:15.477: INFO: Pod "busybox-add8b797-c58e-440b-b940-bbc7b12efc58": Phase="Running", Reason="", readiness=true. Elapsed: 10.02365642s
    Jul  8 07:33:15.477: INFO: Pod "busybox-add8b797-c58e-440b-b940-bbc7b12efc58" satisfied condition "not pending"
    Jul  8 07:33:15.477: INFO: Started pod busybox-add8b797-c58e-440b-b940-bbc7b12efc58 in namespace container-probe-6354
    STEP: checking the pod's current state and verifying that restartCount is present 07/08/22 07:33:15.477
    Jul  8 07:33:15.479: INFO: Initial restart count of pod busybox-add8b797-c58e-440b-b940-bbc7b12efc58 is 0
    Jul  8 07:34:35.155: INFO: Unexpected error: getting pod : 
        <*rest.wrapPreviousError | 0xc002af6300>: {
            currentErr: <*url.Error | 0xc001e78cf0>{
                Op: "Get",
                URL: "https://127.0.0.1:46737/api/v1/namespaces/container-probe-6354/pods/busybox-add8b797-c58e-440b-b940-bbc7b12efc58",
                Err: <*net.OpError | 0xc0029a05a0>{
                    Op: "dial",
                    Net: "tcp",
                    Source: nil,
... skipping 7 lines ...
                        Err: <syscall.Errno>0x6f,
                    },
                },
            },
            previousError: <*errors.errorString | 0xc0000ca130>{s: "unexpected EOF"},
        }
    Jul  8 07:34:35.155: FAIL: getting pod : Get "https://127.0.0.1:46737/api/v1/namespaces/container-probe-6354/pods/busybox-add8b797-c58e-440b-b940-bbc7b12efc58": dial tcp 127.0.0.1:46737: connect: connection refused - error from a previous attempt: unexpected EOF

    Full Stack Trace
    k8s.io/kubernetes/test/e2e/common/node.RunLivenessTest(0xc000e0e2c0, 0xc000f8c400, 0x0, 0x37e11d6000?)
    	test/e2e/common/node/container_probe.go:910 +0x96b
    k8s.io/kubernetes/test/e2e/common/node.glob..func2.5()
    	test/e2e/common/node/container_probe.go:157 +0x165
    STEP: deleting the pod 07/08/22 07:34:35.156
    [AfterEach] [sig-node] Probing container
      test/e2e/framework/framework.go:187
    STEP: Collecting events from namespace "container-probe-6354". 07/08/22 07:34:35.156
    Jul  8 07:34:35.156: INFO: Unexpected error: failed to list events in namespace "container-probe-6354": 
        <*url.Error | 0xc001e79200>: {
            Op: "Get",
            URL: "https://127.0.0.1:46737/api/v1/namespaces/container-probe-6354/events",
            Err: <*net.OpError | 0xc0029a08c0>{
                Op: "dial",
                Net: "tcp",
                Source: nil,
... skipping 5 lines ...
                Err: <*os.SyscallError | 0xc002af67a0>{
                    Syscall: "connect",
                    Err: <syscall.Errno>0x6f,
                },
            },
        }
    Jul  8 07:34:35.156: FAIL: failed to list events in namespace "container-probe-6354": Get "https://127.0.0.1:46737/api/v1/namespaces/container-probe-6354/events": dial tcp 127.0.0.1:46737: connect: connection refused

    Full Stack Trace
    k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc003397770, {0xc003540678, 0x14})
    	test/e2e/framework/util.go:909 +0x191
    k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7b33508, 0xc002888780}, {0xc003540678, 0x14})
    	test/e2e/framework/util.go:927 +0x8d
    k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000e0e2c0, 0x1?)
    	test/e2e/framework/framework.go:181 +0x8b
    k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000e0e2c0)
    	test/e2e/framework/framework.go:435 +0x1e2
    STEP: Destroying namespace "container-probe-6354" for this suite. 07/08/22 07:34:35.157
    Jul  8 07:34:35.157: FAIL: Couldn't delete ns: "container-probe-6354": Delete "https://127.0.0.1:46737/api/v1/namespaces/container-probe-6354": dial tcp 127.0.0.1:46737: connect: connection refused (&url.Error{Op:"Delete", URL:"https://127.0.0.1:46737/api/v1/namespaces/container-probe-6354", Err:(*net.OpError)(0xc0029a0a50)})

    Full Stack Trace
    panic({0x6d6dac0, 0xc0021dcb80})
    	/usr/local/go/src/runtime/panic.go:838 +0x207
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()
    	test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d
    panic({0x6d6fc00, 0xc000a1f110})
    	/usr/local/go/src/runtime/panic.go:838 +0x207
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc000088540, 0xd2}, {0xc003397228?, 0x721e46e?, 0xc003397248?})
    	test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197
    k8s.io/kubernetes/test/e2e/framework.Fail({0xc002aba240, 0xbd}, {0xc0033972c0?, 0xc000086d80?, 0xc0033972e8?})
    	test/e2e/framework/log.go:63 +0x145
    k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7ad92e0, 0xc001e79200}, {0xc002af67e0?, 0x0?, 0x0?})
    	test/e2e/framework/expect.go:76 +0x267
    k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
    	test/e2e/framework/expect.go:43
    k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc003397770, {0xc003540678, 0x14})
... skipping 5 lines ...
    k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000e0e2c0)
    	test/e2e/framework/framework.go:435 +0x1e2
    [ReportAfterEach] TOP-LEVEL
      test/e2e/e2e_test.go:142
  << End Captured GinkgoWriter Output

  Jul  8 07:34:35.155: getting pod : Get "https://127.0.0.1:46737/api/v1/namespaces/container-probe-6354/pods/busybox-add8b797-c58e-440b-b940-bbc7b12efc58": dial tcp 127.0.0.1:46737: connect: connection refused - error from a previous attempt: unexpected EOF
  In [It] at: test/e2e/common/node/container_probe.go:910
------------------------------
• [FAILED] [413.707 seconds]
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand [AfterEach]
    test/e2e/framework/framework.go:187
      Verify if offline PVC expansion works
      test/e2e/storage/testsuites/volume_expand.go:176

  Begin Captured StdOut/StdErr Output >>
    {"msg":"FAILED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","completed":12,"skipped":124,"failed":1,"failures":["[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works"]}
  << End Captured StdOut/StdErr Output

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
      test/e2e/storage/framework/testsuite.go:51
    [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
... skipping 176 lines ...
    Jul  8 07:29:46.030: INFO: deleting *v1.RoleBinding: volume-expand-502-393/csi-hostpathplugin-resizer-role
    Jul  8 07:29:46.044: INFO: deleting *v1.RoleBinding: volume-expand-502-393/csi-hostpathplugin-snapshotter-role
    Jul  8 07:29:46.057: INFO: deleting *v1.StatefulSet: volume-expand-502-393/csi-hostpathplugin
    Jul  8 07:29:46.071: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-volume-expand-502
    STEP: deleting the driver namespace: volume-expand-502-393 07/08/22 07:29:46.087
    STEP: Waiting for namespaces [volume-expand-502-393] to vanish 07/08/22 07:29:46.114
    Jul  8 07:34:35.158: INFO: error deleting namespace volume-expand-502-393: Get "https://127.0.0.1:46737/api/v1/namespaces": dial tcp 127.0.0.1:46737: connect: connection refused - error from a previous attempt: unexpected EOF
    [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
      test/e2e/framework/framework.go:187
    Jul  8 07:34:35.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    Jul  8 07:34:35.158: FAIL: All nodes should be ready after test, Get "https://127.0.0.1:46737/api/v1/nodes": dial tcp 127.0.0.1:46737: connect: connection refused

    Full Stack Trace

    STEP: Destroying namespace "volume-expand-502-393" for this suite. 07/08/22 07:34:35.158
    STEP: Collecting events from namespace "volume-expand-502-393". 07/08/22 07:34:35.158
    Jul  8 07:34:35.159: INFO: Unexpected error: failed to list events in namespace "volume-expand-502-393": 
        <*url.Error | 0xc00329ff80>: {
            Op: "Get",
            URL: "https://127.0.0.1:46737/api/v1/namespaces/volume-expand-502-393/events",
            Err: <*net.OpError | 0xc001ffc640>{
                Op: "dial",
                Net: "tcp",
                Source: nil,
... skipping 5 lines ...
                Err: <*os.SyscallError | 0xc0039446c0>{
                    Syscall: "connect",
                    Err: <syscall.Errno>0x6f,
                },
            },
        }
    Jul  8 07:34:35.159: FAIL: failed to list events in namespace "volume-expand-502-393": Get "https://127.0.0.1:46737/api/v1/namespaces/volume-expand-502-393/events": dial tcp 127.0.0.1:46737: connect: connection refused

    Full Stack Trace
    k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc003df2830, {0xc003adaa20, 0x15})
    	test/e2e/framework/util.go:909 +0x191
    k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7b33508, 0xc002831980}, {0xc003adaa20, 0x15})
    	test/e2e/framework/util.go:927 +0x8d
    k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
    	test/e2e/framework/framework.go:402 +0x77d
    panic({0x6d6dac0, 0xc003aa41c0})
    	/usr/local/go/src/runtime/panic.go:838 +0x207
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()
    	test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d
    panic({0x6d6fc00, 0xc000d06700})
    	/usr/local/go/src/runtime/panic.go:838 +0x207
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0019b20a0, 0x9c}, {0xc003087788?, 0x721e46e?, 0xc0030877b0?})
    	test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197
    k8s.io/kubernetes/test/e2e/framework.Failf({0x72ee744?, 0xc002831980?}, {0xc003087a78?, 0x723cf21?, 0xd?})
    	test/e2e/framework/log.go:51 +0x12c
    k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000b08000)
    	test/e2e/framework/framework.go:483 +0x745
    [ReportAfterEach] TOP-LEVEL
... skipping 25 lines ...
      test/e2e/e2e_test.go:142
  << End Captured GinkgoWriter Output

  Driver local doesn't support GenericEphemeralVolume -- skipping
  In [BeforeEach] at: test/e2e/storage/framework/testsuite.go:116
------------------------------
• [FAILED] [11.275 seconds]
[sig-apps] StatefulSet
test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:101
    [It] should adopt matching orphans and release non-matching pods
    test/e2e/apps/statefulset.go:171

  Begin Captured StdOut/StdErr Output >>
    E0708 07:34:35.149743   82214 runtime.go:79] Observed a panic: ginkgowrapper.FailurePanic{Message:"Jul  8 07:34:35.149: Get \"https://127.0.0.1:46737/api/v1/namespaces/statefulset-7804/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar\": dial tcp 127.0.0.1:46737: connect: connection refused - error from a previous attempt: unexpected EOF", Filename:"test/e2e/framework/statefulset/rest.go", Line:68, FullStackTrace:"k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x7b33508, 0xc006a1d680}, 0xc001656a00)\n\ttest/e2e/framework/statefulset/rest.go:68 +0x153\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1()\n\ttest/e2e/framework/statefulset/wait.go:37 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2625a91, 0x0})\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7b041c0?, 0xc0001b2000?}, 0x25131e5?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233 +0x57\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7b041c0, 0xc0001b2000}, 0xc006a9e078, 0x2e176ca?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 +0x116\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7b041c0, 0xc0001b2000}, 0x50?, 0x2e16265?, 0x20?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:594 +0x9a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7b041c0, 0xc0001b2000}, 0xc0aa155ff8a06eee?, 0xc0062bdda0?, 0x2513547?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:526 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x7220090?, 0x4?, 0x721e46e?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:512 +0x50\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7b33508?, 0xc006a1d680}, 0x1, 0x0, 0xc001656a00)\n\ttest/e2e/framework/statefulset/wait.go:35 
+0xbd\nk8s.io/kubernetes/test/e2e/framework/statefulset.Saturate({0x7b33508, 0xc006a1d680}, 0xc001656a00)\n\ttest/e2e/framework/statefulset/wait.go:179 +0xa7\nk8s.io/kubernetes/test/e2e/apps.glob..func10.2.4()\n\ttest/e2e/apps/statefulset.go:185 +0x239"} (
    Your test failed.
    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    	defer GinkgoRecover()
    at the top of the goroutine that caused this panic.
... skipping 2 lines ...
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x6d6dac0?, 0xc006941140})
    	vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc006941140?})
    	vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75
    panic({0x6d6dac0, 0xc006941140})
    	/usr/local/go/src/runtime/panic.go:838 +0x207
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()
    	test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d
    panic({0x6d6fc00, 0xc00693b960})
    	/usr/local/go/src/runtime/panic.go:838 +0x207
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc004263a40, 0xe6}, {0xc0023ef458?, 0xc0023ef468?, 0x0?})
    	vendor/github.com/onsi/ginkgo/v2/core_dsl.go:335 +0x225
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc004263a40, 0xe6}, {0xc0023ef538?, 0x721e46e?, 0xc0023ef558?})
    	test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197
    k8s.io/kubernetes/test/e2e/framework.Fail({0xc003a13ce0, 0xd1}, {0xc0023ef5d0?, 0xc003a13ce0?, 0xc0023ef5f8?})
    	test/e2e/framework/log.go:63 +0x145
    k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7ad8260, 0xc006964de0}, {0x0?, 0xc0034ff4c0?, 0x10?})
    	test/e2e/framework/expect.go:76 +0x267
    k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
    	test/e2e/framework/expect.go:43
    k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x7b33508, 0xc006a1d680}, 0xc001656a00)
... skipping 19 lines ...
    k8s.io/kubernetes/test/e2e/apps.glob..func10.2.4()
    	test/e2e/apps/statefulset.go:185 +0x239
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    	vendor/github.com/onsi/ginkgo/v2/internal/suite.go:596 +0x8d
    created by k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    	vendor/github.com/onsi/ginkgo/v2/internal/suite.go:584 +0x5f5
    {"msg":"FAILED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods","completed":31,"skipped":262,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods"]}
  << End Captured StdOut/StdErr Output

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [sig-apps] StatefulSet
      test/e2e/framework/framework.go:186
    STEP: Creating a kubernetes client 07/08/22 07:34:23.893
... skipping 10 lines ...
      test/e2e/apps/statefulset.go:171
    STEP: Creating statefulset ss in namespace statefulset-7804 07/08/22 07:34:23.918
    Jul  8 07:34:23.946: INFO: Default storage class: "standard"
    STEP: Saturating stateful set ss 07/08/22 07:34:23.95
    Jul  8 07:34:23.950: INFO: Waiting for stateful pod at index 0 to enter Running
    Jul  8 07:34:23.965: INFO: Found 0 stateful pods, waiting for 1
    Jul  8 07:34:35.148: INFO: Unexpected error: 
        <*rest.wrapPreviousError | 0xc006964de0>: {
            currentErr: <*url.Error | 0xc003a852c0>{
                Op: "Get",
                URL: "https://127.0.0.1:46737/api/v1/namespaces/statefulset-7804/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar",
                Err: <*net.OpError | 0xc00692d540>{
                    Op: "dial",
                    Net: "tcp",
                    Source: nil,
... skipping 7 lines ...
                        Err: <syscall.Errno>0x6f,
                    },
                },
            },
            previousError: <*errors.errorString | 0xc000192100>{s: "unexpected EOF"},
        }
    Jul  8 07:34:35.149: FAIL: Get "https://127.0.0.1:46737/api/v1/namespaces/statefulset-7804/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar": dial tcp 127.0.0.1:46737: connect: connection refused - error from a previous attempt: unexpected EOF

    Full Stack Trace
    k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x7b33508, 0xc006a1d680}, 0xc001656a00)
    	test/e2e/framework/statefulset/rest.go:68 +0x153
    k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1()
    	test/e2e/framework/statefulset/wait.go:37 +0x4a
... skipping 12 lines ...
    k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7b33508?, 0xc006a1d680}, 0x1, 0x0, 0xc001656a00)
    	test/e2e/framework/statefulset/wait.go:35 +0xbd
    k8s.io/kubernetes/test/e2e/framework/statefulset.Saturate({0x7b33508, 0xc006a1d680}, 0xc001656a00)
    	test/e2e/framework/statefulset/wait.go:179 +0xa7
    k8s.io/kubernetes/test/e2e/apps.glob..func10.2.4()
    	test/e2e/apps/statefulset.go:185 +0x239
    E0708 07:34:35.149743   82214 runtime.go:79] Observed a panic: ginkgowrapper.FailurePanic{Message:"Jul  8 07:34:35.149: Get \"https://127.0.0.1:46737/api/v1/namespaces/statefulset-7804/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar\": dial tcp 127.0.0.1:46737: connect: connection refused - error from a previous attempt: unexpected EOF", Filename:"test/e2e/framework/statefulset/rest.go", Line:68, FullStackTrace:"k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x7b33508, 0xc006a1d680}, 0xc001656a00)\n\ttest/e2e/framework/statefulset/rest.go:68 +0x153\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1()\n\ttest/e2e/framework/statefulset/wait.go:37 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2625a91, 0x0})\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7b041c0?, 0xc0001b2000?}, 0x25131e5?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233 +0x57\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7b041c0, 0xc0001b2000}, 0xc006a9e078, 0x2e176ca?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 +0x116\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7b041c0, 0xc0001b2000}, 0x50?, 0x2e16265?, 0x20?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:594 +0x9a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7b041c0, 0xc0001b2000}, 0xc0aa155ff8a06eee?, 0xc0062bdda0?, 0x2513547?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:526 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x7220090?, 0x4?, 0x721e46e?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:512 +0x50\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7b33508?, 0xc006a1d680}, 0x1, 0x0, 0xc001656a00)\n\ttest/e2e/framework/statefulset/wait.go:35 
+0xbd\nk8s.io/kubernetes/test/e2e/framework/statefulset.Saturate({0x7b33508, 0xc006a1d680}, 0xc001656a00)\n\ttest/e2e/framework/statefulset/wait.go:179 +0xa7\nk8s.io/kubernetes/test/e2e/apps.glob..func10.2.4()\n\ttest/e2e/apps/statefulset.go:185 +0x239"} (
    Your test failed.
    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    	defer GinkgoRecover()
    at the top of the goroutine that caused this panic.
... skipping 2 lines ...
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x6d6dac0?, 0xc006941140})
    	vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc006941140?})
    	vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75
    panic({0x6d6dac0, 0xc006941140})
    	/usr/local/go/src/runtime/panic.go:838 +0x207
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()
    	test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d
    panic({0x6d6fc00, 0xc00693b960})
    	/usr/local/go/src/runtime/panic.go:838 +0x207
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc004263a40, 0xe6}, {0xc0023ef458?, 0xc0023ef468?, 0x0?})
    	vendor/github.com/onsi/ginkgo/v2/core_dsl.go:335 +0x225
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc004263a40, 0xe6}, {0xc0023ef538?, 0x721e46e?, 0xc0023ef558?})
    	test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197
    k8s.io/kubernetes/test/e2e/framework.Fail({0xc003a13ce0, 0xd1}, {0xc0023ef5d0?, 0xc003a13ce0?, 0xc0023ef5f8?})
    	test/e2e/framework/log.go:63 +0x145
    k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7ad8260, 0xc006964de0}, {0x0?, 0xc0034ff4c0?, 0x10?})
    	test/e2e/framework/expect.go:76 +0x267
    k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
    	test/e2e/framework/expect.go:43
    k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x7b33508, 0xc006a1d680}, 0xc001656a00)
... skipping 22 lines ...
    	vendor/github.com/onsi/ginkgo/v2/internal/suite.go:596 +0x8d
    created by k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    	vendor/github.com/onsi/ginkgo/v2/internal/suite.go:584 +0x5f5
    [AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
      test/e2e/apps/statefulset.go:122
    Jul  8 07:34:35.150: INFO: Deleting all statefulset in ns statefulset-7804
    Jul  8 07:34:35.150: INFO: Unexpected error: 
        <*url.Error | 0xc003a859e0>: {
            Op: "Get",
            URL: "https://127.0.0.1:46737/apis/apps/v1/namespaces/statefulset-7804/statefulsets",
            Err: <*net.OpError | 0xc00692d720>{
                Op: "dial",
                Net: "tcp",
                Source: nil,
... skipping 5 lines ...
                Err: <*os.SyscallError | 0xc006965300>{
                    Syscall: "connect",
                    Err: <syscall.Errno>0x6f,
                },
            },
        }
    Jul  8 07:34:35.150: FAIL: Get "https://127.0.0.1:46737/apis/apps/v1/namespaces/statefulset-7804/statefulsets": dial tcp 127.0.0.1:46737: connect: connection refused

    Full Stack Trace
    k8s.io/kubernetes/test/e2e/framework/statefulset.DeleteAllStatefulSets({0x7b33508, 0xc006a1d680}, {0xc0034fe890, 0x10})
    	test/e2e/framework/statefulset/rest.go:75 +0x133
    k8s.io/kubernetes/test/e2e/apps.glob..func10.2.2()
    	test/e2e/apps/statefulset.go:127 +0x172
    [AfterEach] [sig-apps] StatefulSet
      test/e2e/framework/framework.go:187
    STEP: Collecting events from namespace "statefulset-7804". 07/08/22 07:34:35.151
    Jul  8 07:34:35.166: INFO: Unexpected error: failed to list events in namespace "statefulset-7804": 
        <*url.Error | 0xc003a85e30>: {
            Op: "Get",
            URL: "https://127.0.0.1:46737/api/v1/namespaces/statefulset-7804/events",
            Err: <*net.OpError | 0xc00692d900>{
                Op: "dial",
                Net: "tcp",
                Source: nil,
... skipping 5 lines ...
                Err: <*os.SyscallError | 0xc006965700>{
                    Syscall: "connect",
                    Err: <syscall.Errno>0x6f,
                },
            },
        }
    Jul  8 07:34:35.166: FAIL: failed to list events in namespace "statefulset-7804": Get "https://127.0.0.1:46737/api/v1/namespaces/statefulset-7804/events": dial tcp 127.0.0.1:46737: connect: connection refused

    Full Stack Trace
    k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc0023f3770, {0xc0034fe890, 0x10})
    	test/e2e/framework/util.go:909 +0x191
    k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7b33508, 0xc006a1d680}, {0xc0034fe890, 0x10})
    	test/e2e/framework/util.go:927 +0x8d
    k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000b3e9a0, 0x2?)
    	test/e2e/framework/framework.go:181 +0x8b
    k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000b3e9a0)
    	test/e2e/framework/framework.go:435 +0x1e2
    STEP: Destroying namespace "statefulset-7804" for this suite. 07/08/22 07:34:35.166
    Jul  8 07:34:35.166: FAIL: Couldn't delete ns: "statefulset-7804": Delete "https://127.0.0.1:46737/api/v1/namespaces/statefulset-7804": dial tcp 127.0.0.1:46737: connect: connection refused (&url.Error{Op:"Delete", URL:"https://127.0.0.1:46737/api/v1/namespaces/statefulset-7804", Err:(*net.OpError)(0xc00692dae0)})

    Full Stack Trace
    panic({0x6d6dac0, 0xc0069418c0})
    	/usr/local/go/src/runtime/panic.go:838 +0x207
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()
    	test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d
    panic({0x6d6fc00, 0xc006bd9ce0})
    	/usr/local/go/src/runtime/panic.go:838 +0x207
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc003499ee0, 0xca}, {0xc0023f3228?, 0x721e46e?, 0xc0023f3248?})
    	test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197
    k8s.io/kubernetes/test/e2e/framework.Fail({0xc0062478c0, 0xb5}, {0xc0023f32c0?, 0xc006279e00?, 0xc0023f32e8?})
    	test/e2e/framework/log.go:63 +0x145
    k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7ad92e0, 0xc003a85e30}, {0xc006965740?, 0x0?, 0x0?})
    	test/e2e/framework/expect.go:76 +0x267
    k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
    	test/e2e/framework/expect.go:43
    k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc0023f3770, {0xc0034fe890, 0x10})
... skipping 5 lines ...
    k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000b3e9a0)
    	test/e2e/framework/framework.go:435 +0x1e2
    [ReportAfterEach] TOP-LEVEL
      test/e2e/e2e_test.go:142
  << End Captured GinkgoWriter Output

  Jul  8 07:34:35.149: Get "https://127.0.0.1:46737/api/v1/namespaces/statefulset-7804/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar": dial tcp 127.0.0.1:46737: connect: connection refused - error from a previous attempt: unexpected EOF
  In [It] at: test/e2e/framework/statefulset/rest.go:68
------------------------------
• [FAILED] [290.692 seconds]
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral [AfterEach]
    test/e2e/framework/framework.go:187
      should support expansion of pvcs created for ephemeral pvcs
      test/e2e/storage/testsuites/ephemeral.go:216

  Begin Captured StdOut/StdErr Output >>
    {"msg":"FAILED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs","completed":21,"skipped":163,"failed":1,"failures":["[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs"]}
  << End Captured StdOut/StdErr Output

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
      test/e2e/storage/framework/testsuite.go:51
    [BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
... skipping 85 lines ...
    Jul  8 07:32:13.762: INFO: Wait up to 5m0s for pod PV pvc-109184f0-0ecd-4fa1-8128-7d5e7dbb3b8d to be fully deleted
    Jul  8 07:32:13.762: INFO: Waiting up to 5m0s for PersistentVolume pvc-109184f0-0ecd-4fa1-8128-7d5e7dbb3b8d to get deleted
    Jul  8 07:32:13.764: INFO: PersistentVolume pvc-109184f0-0ecd-4fa1-8128-7d5e7dbb3b8d was removed
    STEP: Deleting sc 07/08/22 07:32:13.766
    STEP: deleting the test namespace: ephemeral-877 07/08/22 07:32:13.772
    STEP: Waiting for namespaces [ephemeral-877] to vanish 07/08/22 07:32:13.781
    Jul  8 07:34:35.156: INFO: error deleting namespace ephemeral-877: Get "https://127.0.0.1:46737/api/v1/namespaces": dial tcp 127.0.0.1:46737: connect: connection refused - error from a previous attempt: unexpected EOF
    STEP: uninstalling csi csi-hostpath driver 07/08/22 07:34:35.156
    Jul  8 07:34:35.156: INFO: deleting *v1.ServiceAccount: ephemeral-877-6413/csi-attacher
    Jul  8 07:34:35.156: INFO: deleting failed: Delete "https://127.0.0.1:46737/api/v1/namespaces/ephemeral-877-6413/serviceaccounts/csi-attacher": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.156: INFO: deleting *v1.ClusterRole: external-attacher-runner-ephemeral-877
    Jul  8 07:34:35.156: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterroles/external-attacher-runner-ephemeral-877": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.156: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-ephemeral-877
    Jul  8 07:34:35.156: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-attacher-role-ephemeral-877": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.156: INFO: deleting *v1.Role: ephemeral-877-6413/external-attacher-cfg-ephemeral-877
    Jul  8 07:34:35.156: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-877-6413/roles/external-attacher-cfg-ephemeral-877": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.157: INFO: deleting *v1.RoleBinding: ephemeral-877-6413/csi-attacher-role-cfg
    Jul  8 07:34:35.157: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-877-6413/rolebindings/csi-attacher-role-cfg": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.157: INFO: deleting *v1.ServiceAccount: ephemeral-877-6413/csi-provisioner
    Jul  8 07:34:35.157: INFO: deleting failed: Delete "https://127.0.0.1:46737/api/v1/namespaces/ephemeral-877-6413/serviceaccounts/csi-provisioner": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.157: INFO: deleting *v1.ClusterRole: external-provisioner-runner-ephemeral-877
    Jul  8 07:34:35.157: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterroles/external-provisioner-runner-ephemeral-877": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.157: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-ephemeral-877
    Jul  8 07:34:35.157: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-provisioner-role-ephemeral-877": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.157: INFO: deleting *v1.Role: ephemeral-877-6413/external-provisioner-cfg-ephemeral-877
    Jul  8 07:34:35.157: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-877-6413/roles/external-provisioner-cfg-ephemeral-877": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.157: INFO: deleting *v1.RoleBinding: ephemeral-877-6413/csi-provisioner-role-cfg
    Jul  8 07:34:35.157: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-877-6413/rolebindings/csi-provisioner-role-cfg": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.157: INFO: deleting *v1.ServiceAccount: ephemeral-877-6413/csi-snapshotter
    Jul  8 07:34:35.157: INFO: deleting failed: Delete "https://127.0.0.1:46737/api/v1/namespaces/ephemeral-877-6413/serviceaccounts/csi-snapshotter": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.157: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-ephemeral-877
    Jul  8 07:34:35.157: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterroles/external-snapshotter-runner-ephemeral-877": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.157: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-ephemeral-877
    Jul  8 07:34:35.158: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshotter-role-ephemeral-877": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.158: INFO: deleting *v1.Role: ephemeral-877-6413/external-snapshotter-leaderelection-ephemeral-877
    Jul  8 07:34:35.158: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-877-6413/roles/external-snapshotter-leaderelection-ephemeral-877": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.158: INFO: deleting *v1.RoleBinding: ephemeral-877-6413/external-snapshotter-leaderelection
    Jul  8 07:34:35.158: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-877-6413/rolebindings/external-snapshotter-leaderelection": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.158: INFO: deleting *v1.ServiceAccount: ephemeral-877-6413/csi-external-health-monitor-controller
    Jul  8 07:34:35.158: INFO: deleting failed: Delete "https://127.0.0.1:46737/api/v1/namespaces/ephemeral-877-6413/serviceaccounts/csi-external-health-monitor-controller": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.158: INFO: deleting *v1.ClusterRole: external-health-monitor-controller-runner-ephemeral-877
    Jul  8 07:34:35.158: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterroles/external-health-monitor-controller-runner-ephemeral-877": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.158: INFO: deleting *v1.ClusterRoleBinding: csi-external-health-monitor-controller-role-ephemeral-877
    Jul  8 07:34:35.158: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-external-health-monitor-controller-role-ephemeral-877": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.158: INFO: deleting *v1.Role: ephemeral-877-6413/external-health-monitor-controller-cfg-ephemeral-877
    Jul  8 07:34:35.158: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-877-6413/roles/external-health-monitor-controller-cfg-ephemeral-877": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.158: INFO: deleting *v1.RoleBinding: ephemeral-877-6413/csi-external-health-monitor-controller-role-cfg
    Jul  8 07:34:35.158: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-877-6413/rolebindings/csi-external-health-monitor-controller-role-cfg": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.159: INFO: deleting *v1.ServiceAccount: ephemeral-877-6413/csi-resizer
    Jul  8 07:34:35.159: INFO: deleting failed: Delete "https://127.0.0.1:46737/api/v1/namespaces/ephemeral-877-6413/serviceaccounts/csi-resizer": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.159: INFO: deleting *v1.ClusterRole: external-resizer-runner-ephemeral-877
    Jul  8 07:34:35.159: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterroles/external-resizer-runner-ephemeral-877": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.159: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-ephemeral-877
    Jul  8 07:34:35.159: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-resizer-role-ephemeral-877": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.159: INFO: deleting *v1.Role: ephemeral-877-6413/external-resizer-cfg-ephemeral-877
    Jul  8 07:34:35.159: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-877-6413/roles/external-resizer-cfg-ephemeral-877": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.159: INFO: deleting *v1.RoleBinding: ephemeral-877-6413/csi-resizer-role-cfg
    Jul  8 07:34:35.159: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-877-6413/rolebindings/csi-resizer-role-cfg": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.159: INFO: deleting *v1.CSIDriver: csi-hostpath-ephemeral-877
    Jul  8 07:34:35.160: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/storage.k8s.io/v1/csidrivers/csi-hostpath-ephemeral-877": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.160: INFO: deleting *v1.ServiceAccount: ephemeral-877-6413/csi-hostpathplugin-sa
    Jul  8 07:34:35.160: INFO: deleting failed: Delete "https://127.0.0.1:46737/api/v1/namespaces/ephemeral-877-6413/serviceaccounts/csi-hostpathplugin-sa": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.160: INFO: deleting *v1.ClusterRoleBinding: csi-hostpathplugin-attacher-cluster-role-ephemeral-877
    Jul  8 07:34:35.160: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-hostpathplugin-attacher-cluster-role-ephemeral-877": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.160: INFO: deleting *v1.ClusterRoleBinding: csi-hostpathplugin-health-monitor-controller-cluster-role-ephemeral-877
    Jul  8 07:34:35.160: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-hostpathplugin-health-monitor-controller-cluster-role-ephemeral-877": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.160: INFO: deleting *v1.ClusterRoleBinding: csi-hostpathplugin-provisioner-cluster-role-ephemeral-877
    Jul  8 07:34:35.160: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-hostpathplugin-provisioner-cluster-role-ephemeral-877": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.160: INFO: deleting *v1.ClusterRoleBinding: csi-hostpathplugin-resizer-cluster-role-ephemeral-877
    Jul  8 07:34:35.160: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-hostpathplugin-resizer-cluster-role-ephemeral-877": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.160: INFO: deleting *v1.ClusterRoleBinding: csi-hostpathplugin-snapshotter-cluster-role-ephemeral-877
    Jul  8 07:34:35.161: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-hostpathplugin-snapshotter-cluster-role-ephemeral-877": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.161: INFO: deleting *v1.RoleBinding: ephemeral-877-6413/csi-hostpathplugin-attacher-role
    Jul  8 07:34:35.161: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-877-6413/rolebindings/csi-hostpathplugin-attacher-role": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.161: INFO: deleting *v1.RoleBinding: ephemeral-877-6413/csi-hostpathplugin-health-monitor-controller-role
    Jul  8 07:34:35.161: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-877-6413/rolebindings/csi-hostpathplugin-health-monitor-controller-role": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.161: INFO: deleting *v1.RoleBinding: ephemeral-877-6413/csi-hostpathplugin-provisioner-role
    Jul  8 07:34:35.161: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-877-6413/rolebindings/csi-hostpathplugin-provisioner-role": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.161: INFO: deleting *v1.RoleBinding: ephemeral-877-6413/csi-hostpathplugin-resizer-role
    Jul  8 07:34:35.161: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-877-6413/rolebindings/csi-hostpathplugin-resizer-role": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.161: INFO: deleting *v1.RoleBinding: ephemeral-877-6413/csi-hostpathplugin-snapshotter-role
    Jul  8 07:34:35.161: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/ephemeral-877-6413/rolebindings/csi-hostpathplugin-snapshotter-role": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.161: INFO: deleting *v1.StatefulSet: ephemeral-877-6413/csi-hostpathplugin
    Jul  8 07:34:35.161: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/apps/v1/namespaces/ephemeral-877-6413/statefulsets/csi-hostpathplugin": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.162: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-ephemeral-877
    Jul  8 07:34:35.162: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/psp-csi-hostpath-role-ephemeral-877": dial tcp 127.0.0.1:46737: connect: connection refused
    STEP: deleting the driver namespace: ephemeral-877-6413 07/08/22 07:34:35.164
    Jul  8 07:34:35.165: INFO: error deleting namespace ephemeral-877-6413: Delete "https://127.0.0.1:46737/api/v1/namespaces/ephemeral-877-6413": dial tcp 127.0.0.1:46737: connect: connection refused
    [AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
      test/e2e/framework/framework.go:187
    Jul  8 07:34:35.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    Jul  8 07:34:35.165: FAIL: All nodes should be ready after test, Get "https://127.0.0.1:46737/api/v1/nodes": dial tcp 127.0.0.1:46737: connect: connection refused

    Full Stack Trace

    STEP: Destroying namespace "ephemeral-877" for this suite. 07/08/22 07:34:35.165
    STEP: Collecting events from namespace "ephemeral-877". 07/08/22 07:34:35.167
    Jul  8 07:34:35.167: INFO: Unexpected error: failed to list events in namespace "ephemeral-877": 
        <*url.Error | 0xc002db6510>: {
            Op: "Get",
            URL: "https://127.0.0.1:46737/api/v1/namespaces/ephemeral-877/events",
            Err: <*net.OpError | 0xc003b6e4b0>{
                Op: "dial",
                Net: "tcp",
                Source: nil,
... skipping 5 lines ...
                Err: <*os.SyscallError | 0xc00019a880>{
                    Syscall: "connect",
                    Err: <syscall.Errno>0x6f,
                },
            },
        }
    Jul  8 07:34:35.167: FAIL: failed to list events in namespace "ephemeral-877": Get "https://127.0.0.1:46737/api/v1/namespaces/ephemeral-877/events": dial tcp 127.0.0.1:46737: connect: connection refused

    Full Stack Trace
    k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc0005b6830, {0xc0038bd230, 0xd})
    	test/e2e/framework/util.go:909 +0x191
    k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7b33508, 0xc00374d200}, {0xc0038bd230, 0xd})
    	test/e2e/framework/util.go:927 +0x8d
    k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
    	test/e2e/framework/framework.go:402 +0x77d
    panic({0x6d6dac0, 0xc0021bb040})
    	/usr/local/go/src/runtime/panic.go:838 +0x207
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()
    	test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d
    panic({0x6d6fc00, 0xc00097f260})
    	/usr/local/go/src/runtime/panic.go:838 +0x207
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0013de780, 0x9c}, {0xc002fd7788?, 0x721e46e?, 0xc002fd77b0?})
    	test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197
    k8s.io/kubernetes/test/e2e/framework.Failf({0x72ee744?, 0xc00374d200?}, {0xc002fd7a78?, 0x722c850?, 0x9?})
    	test/e2e/framework/log.go:51 +0x12c
    k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000d8c000)
    	test/e2e/framework/framework.go:483 +0x745
    [ReportAfterEach] TOP-LEVEL
... skipping 75 lines ...
      test/e2e/e2e_test.go:142
  << End Captured GinkgoWriter Output

  Driver local doesn't support InlineVolume -- skipping
  In [BeforeEach] at: test/e2e/storage/framework/testsuite.go:116
------------------------------
• [FAILED] [270.311 seconds]
[sig-storage] CSI mock volume [AfterEach]
test/e2e/framework/framework.go:187
  CSIServiceAccountToken
  test/e2e/storage/csi_mock_volume.go:1574
    token should be plumbed down when csiServiceAccountTokenEnabled=true
    test/e2e/storage/csi_mock_volume.go:1602

  Begin Captured StdOut/StdErr Output >>
    {"msg":"FAILED [sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true","completed":15,"skipped":180,"failed":1,"failures":["[sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true"]}
  << End Captured StdOut/StdErr Output

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [sig-storage] CSI mock volume
      test/e2e/framework/framework.go:186
    STEP: Creating a kubernetes client 07/08/22 07:30:04.888
... skipping 65 lines ...
    Jul  8 07:30:41.060: INFO: Pod "pvc-volume-tester-fx7t7" satisfied condition "running"
    STEP: Deleting the previously created pod 07/08/22 07:30:46.061
    Jul  8 07:30:46.061: INFO: Deleting pod "pvc-volume-tester-fx7t7" in namespace "csi-mock-volumes-1049"
    Jul  8 07:30:46.081: INFO: Wait up to 5m0s for pod "pvc-volume-tester-fx7t7" to be fully deleted
    STEP: Checking CSI driver logs 07/08/22 07:30:56.093
    Jul  8 07:30:56.102: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.tokens: {"":{"token":"eyJhbGciOiJSUzI1NiIsImtpZCI6IkZ2Q3FPY0ItNFd6YkFBWU1FbFM1Um96R3otRWRZMHRhQUl5T2lHdW8xQTAifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjU3MjY2MDI3LCJpYXQiOjE2NTcyNjU0MjcsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJjc2ktbW9jay12b2x1bWVzLTEwNDkiLCJwb2QiOnsibmFtZSI6InB2Yy12b2x1bWUtdGVzdGVyLWZ4N3Q3IiwidWlkIjoiNTIzNDYyYjAtOTcxZi00ODk4LWJlZTgtMjU1NzIzMGU2MjJmIn0sInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJkZWZhdWx0IiwidWlkIjoiMmU3MTJmM2EtOGJkNC00ODkwLWJmNmItZGIzZmQyNGFmYWQ5In19LCJuYmYiOjE2NTcyNjU0MjcsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpjc2ktbW9jay12b2x1bWVzLTEwNDk6ZGVmYXVsdCJ9.lrFyUzGSQGNT25Yylgjry_iwfEFvWC64eIQVuDlVz1Jsf_VzlkznLwhkgMidA2AsIJO8r8sDwr_goOA_BU270EEHGHx6X69B3keticLbvcup--oyy9uCByMrSMos2gKt7NxFMK4xRCh30XGtwM0_cHkKsej785lNis5N5fzXpBKhLik22aZElFpRiuktAKFopk4UXdkYW0vPTYAo9kNBijjilSRTbbVUu9fhiTu39noKRBxKqgaULpRy5PPMyEsCV1f4ndlY7Jbf31DvytrituqJK7TWfB_9emCbicDttyTjV-kOsDUUhsle3NlkLJthC1cBocGN1jPE_HcQ-_3tOw","expirationTimestamp":"2022-07-08T07:40:27Z"}}
    Jul  8 07:30:56.102: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"cf19758a-fe8f-11ec-89f4-3ef83c2770a1","target_path":"/var/lib/kubelet/pods/523462b0-971f-4898-bee8-2557230e622f/volumes/kubernetes.io~csi/pvc-3e4d4e00-b03e-4e03-959d-630ca076a42e/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
    STEP: Deleting pod pvc-volume-tester-fx7t7 07/08/22 07:30:56.102
    Jul  8 07:30:56.102: INFO: Deleting pod "pvc-volume-tester-fx7t7" in namespace "csi-mock-volumes-1049"
    STEP: Deleting claim pvc-49qsk 07/08/22 07:30:56.105
    Jul  8 07:30:56.114: INFO: Waiting up to 2m0s for PersistentVolume pvc-3e4d4e00-b03e-4e03-959d-630ca076a42e to get deleted
    Jul  8 07:30:56.124: INFO: PersistentVolume pvc-3e4d4e00-b03e-4e03-959d-630ca076a42e found and phase=Bound (9.174831ms)
    Jul  8 07:30:58.134: INFO: PersistentVolume pvc-3e4d4e00-b03e-4e03-959d-630ca076a42e was removed
    STEP: Deleting storageclass csi-mock-volumes-1049-sc2bwmm 07/08/22 07:30:58.134
    STEP: Cleaning up resources 07/08/22 07:30:58.14
    STEP: deleting the test namespace: csi-mock-volumes-1049 07/08/22 07:30:58.14
    STEP: Waiting for namespaces [csi-mock-volumes-1049] to vanish 07/08/22 07:30:58.152
    ERROR: get pod list in csi-mock-volumes-1049-3563: Get "https://127.0.0.1:46737/api/v1/namespaces/csi-mock-volumes-1049-3563/pods": dial tcp 127.0.0.1:46737: connect: connection refused - error from a previous attempt: EOF
    Jul  8 07:34:35.175: INFO: error deleting namespace csi-mock-volumes-1049: Get "https://127.0.0.1:46737/api/v1/namespaces": dial tcp 127.0.0.1:46737: connect: connection refused - error from a previous attempt: EOF
    STEP: uninstalling csi mock driver 07/08/22 07:34:35.175
    Jul  8 07:34:35.175: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1049-3563/csi-attacher
    Jul  8 07:34:35.176: INFO: deleting failed: Delete "https://127.0.0.1:46737/api/v1/namespaces/csi-mock-volumes-1049-3563/serviceaccounts/csi-attacher": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.176: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1049
    Jul  8 07:34:35.176: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterroles/external-attacher-runner-csi-mock-volumes-1049": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.176: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1049
    Jul  8 07:34:35.176: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-attacher-role-csi-mock-volumes-1049": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.176: INFO: deleting *v1.Role: csi-mock-volumes-1049-3563/external-attacher-cfg-csi-mock-volumes-1049
    Jul  8 07:34:35.176: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/csi-mock-volumes-1049-3563/roles/external-attacher-cfg-csi-mock-volumes-1049": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.176: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1049-3563/csi-attacher-role-cfg
    Jul  8 07:34:35.176: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/csi-mock-volumes-1049-3563/rolebindings/csi-attacher-role-cfg": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.176: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1049-3563/csi-provisioner
    Jul  8 07:34:35.176: INFO: deleting failed: Delete "https://127.0.0.1:46737/api/v1/namespaces/csi-mock-volumes-1049-3563/serviceaccounts/csi-provisioner": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.176: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1049
    Jul  8 07:34:35.177: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterroles/external-provisioner-runner-csi-mock-volumes-1049": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.177: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1049
    Jul  8 07:34:35.177: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-provisioner-role-csi-mock-volumes-1049": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.177: INFO: deleting *v1.Role: csi-mock-volumes-1049-3563/external-provisioner-cfg-csi-mock-volumes-1049
    Jul  8 07:34:35.177: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/csi-mock-volumes-1049-3563/roles/external-provisioner-cfg-csi-mock-volumes-1049": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.177: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1049-3563/csi-provisioner-role-cfg
    Jul  8 07:34:35.177: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/csi-mock-volumes-1049-3563/rolebindings/csi-provisioner-role-cfg": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.177: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1049-3563/csi-resizer
    Jul  8 07:34:35.177: INFO: deleting failed: Delete "https://127.0.0.1:46737/api/v1/namespaces/csi-mock-volumes-1049-3563/serviceaccounts/csi-resizer": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.177: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1049
    Jul  8 07:34:35.177: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterroles/external-resizer-runner-csi-mock-volumes-1049": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.177: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1049
    Jul  8 07:34:35.177: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-resizer-role-csi-mock-volumes-1049": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.177: INFO: deleting *v1.Role: csi-mock-volumes-1049-3563/external-resizer-cfg-csi-mock-volumes-1049
    Jul  8 07:34:35.178: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/csi-mock-volumes-1049-3563/roles/external-resizer-cfg-csi-mock-volumes-1049": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.178: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1049-3563/csi-resizer-role-cfg
    Jul  8 07:34:35.178: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/csi-mock-volumes-1049-3563/rolebindings/csi-resizer-role-cfg": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.178: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1049-3563/csi-snapshotter
    Jul  8 07:34:35.178: INFO: deleting failed: Delete "https://127.0.0.1:46737/api/v1/namespaces/csi-mock-volumes-1049-3563/serviceaccounts/csi-snapshotter": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.178: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1049
    Jul  8 07:34:35.178: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterroles/external-snapshotter-runner-csi-mock-volumes-1049": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.178: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1049
    Jul  8 07:34:35.178: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshotter-role-csi-mock-volumes-1049": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.178: INFO: deleting *v1.Role: csi-mock-volumes-1049-3563/external-snapshotter-leaderelection-csi-mock-volumes-1049
    Jul  8 07:34:35.178: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/csi-mock-volumes-1049-3563/roles/external-snapshotter-leaderelection-csi-mock-volumes-1049": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.178: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1049-3563/external-snapshotter-leaderelection
    Jul  8 07:34:35.178: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/csi-mock-volumes-1049-3563/rolebindings/external-snapshotter-leaderelection": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.178: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1049-3563/csi-mock
    Jul  8 07:34:35.178: INFO: deleting failed: Delete "https://127.0.0.1:46737/api/v1/namespaces/csi-mock-volumes-1049-3563/serviceaccounts/csi-mock": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.178: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1049
    Jul  8 07:34:35.179: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-controller-attacher-role-csi-mock-volumes-1049": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.179: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1049
    Jul  8 07:34:35.179: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-controller-provisioner-role-csi-mock-volumes-1049": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.179: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1049
    Jul  8 07:34:35.179: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1049": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.179: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1049
    Jul  8 07:34:35.186: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/psp-csi-controller-driver-registrar-role-csi-mock-volumes-1049": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.186: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1049
    Jul  8 07:34:35.186: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-controller-resizer-role-csi-mock-volumes-1049": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.186: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1049
    Jul  8 07:34:35.187: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-controller-snapshotter-role-csi-mock-volumes-1049": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.187: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1049
    Jul  8 07:34:35.191: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/storage.k8s.io/v1/storageclasses/csi-mock-sc-csi-mock-volumes-1049": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.191: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1049-3563/csi-mockplugin
    Jul  8 07:34:35.191: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/apps/v1/namespaces/csi-mock-volumes-1049-3563/statefulsets/csi-mockplugin": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.191: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-1049
    Jul  8 07:34:35.192: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/storage.k8s.io/v1/csidrivers/csi-mock-csi-mock-volumes-1049": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.192: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1049-3563/csi-mockplugin-attacher
    Jul  8 07:34:35.192: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/apps/v1/namespaces/csi-mock-volumes-1049-3563/statefulsets/csi-mockplugin-attacher": dial tcp 127.0.0.1:46737: connect: connection refused
    ERROR: get pod list in csi-mock-volumes-1049-3563: Get "https://127.0.0.1:46737/api/v1/namespaces/csi-mock-volumes-1049-3563/pods": dial tcp 127.0.0.1:46737: connect: connection refused
    STEP: deleting the driver namespace: csi-mock-volumes-1049-3563 07/08/22 07:34:35.192
    Jul  8 07:34:35.194: INFO: error deleting namespace csi-mock-volumes-1049-3563: Delete "https://127.0.0.1:46737/api/v1/namespaces/csi-mock-volumes-1049-3563": dial tcp 127.0.0.1:46737: connect: connection refused
    [AfterEach] [sig-storage] CSI mock volume
      test/e2e/framework/framework.go:187
    Jul  8 07:34:35.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    Jul  8 07:34:35.196: FAIL: All nodes should be ready after test, Get "https://127.0.0.1:46737/api/v1/nodes": dial tcp 127.0.0.1:46737: connect: connection refused

    Full Stack Trace

    STEP: Destroying namespace "csi-mock-volumes-1049" for this suite. 07/08/22 07:34:35.196
    STEP: Collecting events from namespace "csi-mock-volumes-1049". 07/08/22 07:34:35.197
    Jul  8 07:34:35.199: INFO: Unexpected error: failed to list events in namespace "csi-mock-volumes-1049": 
        <*url.Error | 0xc0021dc0c0>: {
            Op: "Get",
            URL: "https://127.0.0.1:46737/api/v1/namespaces/csi-mock-volumes-1049/events",
            Err: <*net.OpError | 0xc0004d2000>{
                Op: "dial",
                Net: "tcp",
                Source: nil,
... skipping 5 lines ...
                Err: <*os.SyscallError | 0xc003ce7ba0>{
                    Syscall: "connect",
                    Err: <syscall.Errno>0x6f,
                },
            },
        }
    Jul  8 07:34:35.199: FAIL: failed to list events in namespace "csi-mock-volumes-1049": Get "https://127.0.0.1:46737/api/v1/namespaces/csi-mock-volumes-1049/events": dial tcp 127.0.0.1:46737: connect: connection refused
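As an aside on the error dump above: the `<syscall.Errno>0x6f` inside the `*os.SyscallError` is hexadecimal for decimal 111, which on Linux is `ECONNREFUSED` — the same "connection refused" rendered in the `dial tcp` message. A minimal check of that correspondence:

```python
import errno

# 0x6f from the Go syscall.Errno dump is decimal 111,
# which is ECONNREFUSED ("connection refused") on Linux.
code = 0x6f
print(code)                    # 111
print(errno.errorcode[code])   # 'ECONNREFUSED'
```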

    Full Stack Trace
    k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc0004f0830, {0xc003921e18, 0x15})
    	test/e2e/framework/util.go:909 +0x191
    k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7b33508, 0xc000572d80}, {0xc003921e18, 0x15})
    	test/e2e/framework/util.go:927 +0x8d
    k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
    	test/e2e/framework/framework.go:402 +0x77d
    panic({0x6d6dac0, 0xc002ed39c0})
    	/usr/local/go/src/runtime/panic.go:838 +0x207
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()
    	test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d
    panic({0x6d6fc00, 0xc002d45180})
    	/usr/local/go/src/runtime/panic.go:838 +0x207
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0021314a0, 0x9c}, {0xc002c0d788?, 0x721e46e?, 0xc002c0d7b0?})
    	test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197
    k8s.io/kubernetes/test/e2e/framework.Failf({0x72ee744?, 0xc000572d80?}, {0xc002c0da78?, 0x724a217?, 0x10?})
    	test/e2e/framework/log.go:51 +0x12c
    k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000d9d600)
    	test/e2e/framework/framework.go:483 +0x745
    [ReportAfterEach] TOP-LEVEL
... skipping 223 lines ...
      test/e2e/e2e_test.go:142
  << End Captured GinkgoWriter Output

  Driver hostPathSymlink doesn't support DynamicPV -- skipping
  In [BeforeEach] at: test/e2e/storage/framework/testsuite.go:116
------------------------------
• [FAILED] [286.760 seconds]
[sig-storage] CSI mock volume [AfterEach]
test/e2e/framework/framework.go:187
  Delegate FSGroup to CSI driver [LinuxOnly]
  test/e2e/storage/csi_mock_volume.go:1719
    should not pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP
    test/e2e/storage/csi_mock_volume.go:1735

  Begin Captured StdOut/StdErr Output >>
    {"msg":"FAILED [sig-storage] CSI mock volume Delegate FSGroup to CSI driver [LinuxOnly] should not pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP","completed":18,"skipped":148,"failed":1,"failures":["[sig-storage] CSI mock volume Delegate FSGroup to CSI driver [LinuxOnly] should not pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP"]}
  << End Captured StdOut/StdErr Output

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [sig-storage] CSI mock volume
      test/e2e/framework/framework.go:186
    STEP: Creating a kubernetes client 07/08/22 07:29:48.793
... skipping 39 lines ...
    Jul  8 07:29:49.261: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8385
    Jul  8 07:29:49.268: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8385
    Jul  8 07:29:49.276: INFO: creating *v1.StatefulSet: csi-mock-volumes-8385-7384/csi-mockplugin
    Jul  8 07:29:49.284: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-8385
    Jul  8 07:29:49.293: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-8385"
    Jul  8 07:29:49.301: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8385 to register on node kind-worker
    I0708 07:30:03.190934   82232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
    I0708 07:30:03.193296   82232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8385","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes/kubernetes/tree/master/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
    I0708 07:30:03.195089   82232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
    I0708 07:30:03.198184   82232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
    I0708 07:30:03.330656   82232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8385","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes/kubernetes/tree/master/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
    I0708 07:30:04.100415   82232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-8385"},"Error":"","FullError":null}
    STEP: Creating pod with fsGroup 07/08/22 07:30:15.747
    Jul  8 07:30:15.750: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
    Jul  8 07:30:15.779: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-n9b59] to have phase Bound
    I0708 07:30:15.792042   82232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-cd2cf5a0-3486-4fc3-9378-e3f6c38075a2","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-cd2cf5a0-3486-4fc3-9378-e3f6c38075a2"}}},"Error":"","FullError":null}
    Jul  8 07:30:15.797: INFO: PersistentVolumeClaim pvc-n9b59 found but phase is Pending instead of Bound.
    Jul  8 07:30:17.807: INFO: PersistentVolumeClaim pvc-n9b59 found and phase=Bound (2.026999195s)
    Jul  8 07:30:17.817: INFO: Waiting up to 5m0s for pod "pvc-volume-tester-8l6pf" in namespace "csi-mock-volumes-8385" to be "running"
    Jul  8 07:30:17.836: INFO: Pod "pvc-volume-tester-8l6pf": Phase="Pending", Reason="", readiness=false. Elapsed: 19.010861ms
    I0708 07:30:19.570572   82232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
    I0708 07:30:19.573849   82232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
    I0708 07:30:19.576623   82232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
    Jul  8 07:30:19.579: INFO: >>> kubeConfig: /root/.kube/kind-test-config
    Jul  8 07:30:19.580: INFO: ExecWithOptions: Clientset creation
    Jul  8 07:30:19.580: INFO: ExecWithOptions: execute(POST https://127.0.0.1:46737/api/v1/namespaces/csi-mock-volumes-8385-7384/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fplugins%2Fkubernetes.io%2Fcsi%2Fcsi-mock-csi-mock-volumes-8385%2F4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a%2Fglobalmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fplugins%2Fkubernetes.io%2Fcsi%2Fcsi-mock-csi-mock-volumes-8385%2F4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a%2Fglobalmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true)
    I0708 07:30:19.700776   82232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-8385/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-cd2cf5a0-3486-4fc3-9378-e3f6c38075a2","storage.kubernetes.io/csiProvisionerIdentity":"1657265403199-8081-csi-mock-csi-mock-volumes-8385"}},"Response":{},"Error":"","FullError":null}
    I0708 07:30:19.703855   82232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
    I0708 07:30:19.706276   82232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
    I0708 07:30:19.709452   82232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
    Jul  8 07:30:19.711: INFO: >>> kubeConfig: /root/.kube/kind-test-config
    Jul  8 07:30:19.712: INFO: ExecWithOptions: Clientset creation
    Jul  8 07:30:19.712: INFO: ExecWithOptions: execute(POST https://127.0.0.1:46737/api/v1/namespaces/csi-mock-volumes-8385-7384/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F2ae9d696-2471-4485-bb92-3191fcbd1fa7%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-cd2cf5a0-3486-4fc3-9378-e3f6c38075a2%2Fmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F2ae9d696-2471-4485-bb92-3191fcbd1fa7%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-cd2cf5a0-3486-4fc3-9378-e3f6c38075a2%2Fmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true)
    Jul  8 07:30:19.834: INFO: >>> kubeConfig: /root/.kube/kind-test-config
    Jul  8 07:30:19.834: INFO: ExecWithOptions: Clientset creation
    Jul  8 07:30:19.835: INFO: ExecWithOptions: execute(POST https://127.0.0.1:46737/api/v1/namespaces/csi-mock-volumes-8385-7384/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F2ae9d696-2471-4485-bb92-3191fcbd1fa7%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-cd2cf5a0-3486-4fc3-9378-e3f6c38075a2%2Fmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F2ae9d696-2471-4485-bb92-3191fcbd1fa7%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-cd2cf5a0-3486-4fc3-9378-e3f6c38075a2%2Fmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true)
    Jul  8 07:30:19.847: INFO: Pod "pvc-volume-tester-8l6pf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029916457s
    Jul  8 07:30:19.940: INFO: >>> kubeConfig: /root/.kube/kind-test-config
    Jul  8 07:30:19.941: INFO: ExecWithOptions: Clientset creation
    Jul  8 07:30:19.942: INFO: ExecWithOptions: execute(POST https://127.0.0.1:46737/api/v1/namespaces/csi-mock-volumes-8385-7384/pods/csi-mockplugin-0/exec?command=mkdir&command=%2Fvar%2Flib%2Fkubelet%2Fpods%2F2ae9d696-2471-4485-bb92-3191fcbd1fa7%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-cd2cf5a0-3486-4fc3-9378-e3f6c38075a2%2Fmount&container=busybox&container=busybox&stderr=true&stdout=true)
    I0708 07:30:20.073311   82232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-8385/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount","target_path":"/var/lib/kubelet/pods/2ae9d696-2471-4485-bb92-3191fcbd1fa7/volumes/kubernetes.io~csi/pvc-cd2cf5a0-3486-4fc3-9378-e3f6c38075a2/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-cd2cf5a0-3486-4fc3-9378-e3f6c38075a2","storage.kubernetes.io/csiProvisionerIdentity":"1657265403199-8081-csi-mock-csi-mock-volumes-8385"}},"Response":{},"Error":"","FullError":null}
    Jul  8 07:30:21.850: INFO: Pod "pvc-volume-tester-8l6pf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033784642s
    Jul  8 07:30:23.849: INFO: Pod "pvc-volume-tester-8l6pf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032738161s
    Jul  8 07:30:25.855: INFO: Pod "pvc-volume-tester-8l6pf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.038343662s
    Jul  8 07:30:27.840: INFO: Pod "pvc-volume-tester-8l6pf": Phase="Running", Reason="", readiness=true. Elapsed: 10.023604729s
    Jul  8 07:30:27.840: INFO: Pod "pvc-volume-tester-8l6pf" satisfied condition "running"
    STEP: Deleting pod pvc-volume-tester-8l6pf 07/08/22 07:30:27.84
    Jul  8 07:30:27.840: INFO: Deleting pod "pvc-volume-tester-8l6pf" in namespace "csi-mock-volumes-8385"
    Jul  8 07:30:27.846: INFO: Wait up to 5m0s for pod "pvc-volume-tester-8l6pf" to be fully deleted
    Jul  8 07:30:58.254: INFO: >>> kubeConfig: /root/.kube/kind-test-config
    Jul  8 07:30:58.255: INFO: ExecWithOptions: Clientset creation
    Jul  8 07:30:58.255: INFO: ExecWithOptions: execute(POST https://127.0.0.1:46737/api/v1/namespaces/csi-mock-volumes-8385-7384/pods/csi-mockplugin-0/exec?command=rm&command=-rf&command=%2Fvar%2Flib%2Fkubelet%2Fpods%2F2ae9d696-2471-4485-bb92-3191fcbd1fa7%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-cd2cf5a0-3486-4fc3-9378-e3f6c38075a2%2Fmount&container=busybox&container=busybox&stderr=true&stdout=true)
    I0708 07:30:58.367092   82232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/2ae9d696-2471-4485-bb92-3191fcbd1fa7/volumes/kubernetes.io~csi/pvc-cd2cf5a0-3486-4fc3-9378-e3f6c38075a2/mount"},"Response":{},"Error":"","FullError":null}
    I0708 07:30:58.459732   82232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
    I0708 07:30:58.462164   82232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-8385/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount"},"Response":{},"Error":"","FullError":null}
    STEP: Deleting claim pvc-n9b59 07/08/22 07:30:59.866
    Jul  8 07:30:59.875: INFO: Waiting up to 2m0s for PersistentVolume pvc-cd2cf5a0-3486-4fc3-9378-e3f6c38075a2 to get deleted
    Jul  8 07:30:59.885: INFO: PersistentVolume pvc-cd2cf5a0-3486-4fc3-9378-e3f6c38075a2 found and phase=Bound (10.428434ms)
    I0708 07:30:59.911572   82232 csi.go:436] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
    Jul  8 07:31:01.937: INFO: PersistentVolume pvc-cd2cf5a0-3486-4fc3-9378-e3f6c38075a2 was removed
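The `gRPCCall` lines above trace the full CSI volume lifecycle against the mock driver: `CreateVolume`, then node-side `NodeStageVolume`/`NodePublishVolume`, then teardown via `NodeUnpublishVolume`/`NodeUnstageVolume`, and finally `DeleteVolume`. A hedged sketch (sample lines shortened; real entries carry full Request/Response payloads) of extracting that method sequence from such log lines:

```python
import json
import re

# Shortened sample lines in the shape of the csi.go "gRPCCall:" entries above.
log = [
    'I0708 csi.go:436] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{},"Response":{},"Error":"","FullError":null}',
    'I0708 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{},"Response":{},"Error":"","FullError":null}',
    'I0708 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{},"Response":{},"Error":"","FullError":null}',
    'I0708 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{},"Response":{},"Error":"","FullError":null}',
    'I0708 csi.go:436] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{},"Response":{},"Error":"","FullError":null}',
    'I0708 csi.go:436] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{},"Response":{},"Error":"","FullError":null}',
]

def call_sequence(lines):
    """Extract the short RPC method name from each gRPCCall log line."""
    seq = []
    for line in lines:
        m = re.search(r'gRPCCall: (\{.*\})$', line)
        if m:
            # "/csi.v1.Node/NodeStageVolume" -> "NodeStageVolume"
            seq.append(json.loads(m.group(1))["Method"].rsplit("/", 1)[-1])
    return seq

print(call_sequence(log))
```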
    STEP: Deleting storageclass csi-mock-volumes-8385-scvpnhm 07/08/22 07:31:01.937
    STEP: Cleaning up resources 07/08/22 07:31:01.944
    STEP: deleting the test namespace: csi-mock-volumes-8385 07/08/22 07:31:07.987
    STEP: Waiting for namespaces [csi-mock-volumes-8385] to vanish 07/08/22 07:31:08.009
    Jul  8 07:34:35.151: INFO: error deleting namespace csi-mock-volumes-8385: Get "https://127.0.0.1:46737/api/v1/namespaces": dial tcp 127.0.0.1:46737: connect: connection refused - error from a previous attempt: unexpected EOF
    STEP: uninstalling csi mock driver 07/08/22 07:34:35.151
    Jul  8 07:34:35.151: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8385-7384/csi-attacher
    Jul  8 07:34:35.152: INFO: deleting failed: Delete "https://127.0.0.1:46737/api/v1/namespaces/csi-mock-volumes-8385-7384/serviceaccounts/csi-attacher": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.152: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8385
    Jul  8 07:34:35.152: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterroles/external-attacher-runner-csi-mock-volumes-8385": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.152: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8385
    Jul  8 07:34:35.152: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-attacher-role-csi-mock-volumes-8385": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.152: INFO: deleting *v1.Role: csi-mock-volumes-8385-7384/external-attacher-cfg-csi-mock-volumes-8385
    Jul  8 07:34:35.152: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/csi-mock-volumes-8385-7384/roles/external-attacher-cfg-csi-mock-volumes-8385": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.152: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8385-7384/csi-attacher-role-cfg
    Jul  8 07:34:35.152: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/csi-mock-volumes-8385-7384/rolebindings/csi-attacher-role-cfg": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.152: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8385-7384/csi-provisioner
    Jul  8 07:34:35.152: INFO: deleting failed: Delete "https://127.0.0.1:46737/api/v1/namespaces/csi-mock-volumes-8385-7384/serviceaccounts/csi-provisioner": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.152: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8385
    Jul  8 07:34:35.152: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterroles/external-provisioner-runner-csi-mock-volumes-8385": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.152: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8385
    Jul  8 07:34:35.153: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-provisioner-role-csi-mock-volumes-8385": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.153: INFO: deleting *v1.Role: csi-mock-volumes-8385-7384/external-provisioner-cfg-csi-mock-volumes-8385
    Jul  8 07:34:35.153: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/csi-mock-volumes-8385-7384/roles/external-provisioner-cfg-csi-mock-volumes-8385": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.153: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8385-7384/csi-provisioner-role-cfg
    Jul  8 07:34:35.153: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/csi-mock-volumes-8385-7384/rolebindings/csi-provisioner-role-cfg": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.153: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8385-7384/csi-resizer
    Jul  8 07:34:35.153: INFO: deleting failed: Delete "https://127.0.0.1:46737/api/v1/namespaces/csi-mock-volumes-8385-7384/serviceaccounts/csi-resizer": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.153: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8385
    Jul  8 07:34:35.153: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterroles/external-resizer-runner-csi-mock-volumes-8385": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.153: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8385
    Jul  8 07:34:35.153: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-resizer-role-csi-mock-volumes-8385": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.153: INFO: deleting *v1.Role: csi-mock-volumes-8385-7384/external-resizer-cfg-csi-mock-volumes-8385
    Jul  8 07:34:35.153: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/csi-mock-volumes-8385-7384/roles/external-resizer-cfg-csi-mock-volumes-8385": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.153: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8385-7384/csi-resizer-role-cfg
    Jul  8 07:34:35.154: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/csi-mock-volumes-8385-7384/rolebindings/csi-resizer-role-cfg": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.154: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8385-7384/csi-snapshotter
    Jul  8 07:34:35.154: INFO: deleting failed: Delete "https://127.0.0.1:46737/api/v1/namespaces/csi-mock-volumes-8385-7384/serviceaccounts/csi-snapshotter": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.154: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8385
    Jul  8 07:34:35.154: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterroles/external-snapshotter-runner-csi-mock-volumes-8385": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.154: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8385
    Jul  8 07:34:35.154: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshotter-role-csi-mock-volumes-8385": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.154: INFO: deleting *v1.Role: csi-mock-volumes-8385-7384/external-snapshotter-leaderelection-csi-mock-volumes-8385
    Jul  8 07:34:35.154: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/csi-mock-volumes-8385-7384/roles/external-snapshotter-leaderelection-csi-mock-volumes-8385": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.154: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8385-7384/external-snapshotter-leaderelection
    Jul  8 07:34:35.154: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/namespaces/csi-mock-volumes-8385-7384/rolebindings/external-snapshotter-leaderelection": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.154: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8385-7384/csi-mock
    Jul  8 07:34:35.155: INFO: deleting failed: Delete "https://127.0.0.1:46737/api/v1/namespaces/csi-mock-volumes-8385-7384/serviceaccounts/csi-mock": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.155: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8385
    Jul  8 07:34:35.155: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-controller-attacher-role-csi-mock-volumes-8385": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.155: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8385
    Jul  8 07:34:35.159: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-controller-provisioner-role-csi-mock-volumes-8385": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.159: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8385
    Jul  8 07:34:35.159: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8385": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.159: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8385
    Jul  8 07:34:35.159: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/psp-csi-controller-driver-registrar-role-csi-mock-volumes-8385": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.159: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8385
    Jul  8 07:34:35.160: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-controller-resizer-role-csi-mock-volumes-8385": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.160: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8385
    Jul  8 07:34:35.160: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-controller-snapshotter-role-csi-mock-volumes-8385": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.160: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8385
    Jul  8 07:34:35.160: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/storage.k8s.io/v1/storageclasses/csi-mock-sc-csi-mock-volumes-8385": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.160: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8385-7384/csi-mockplugin
    Jul  8 07:34:35.160: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/apps/v1/namespaces/csi-mock-volumes-8385-7384/statefulsets/csi-mockplugin": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.160: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-8385
    Jul  8 07:34:35.160: INFO: deleting failed: Delete "https://127.0.0.1:46737/apis/storage.k8s.io/v1/csidrivers/csi-mock-csi-mock-volumes-8385": dial tcp 127.0.0.1:46737: connect: connection refused
    ERROR: get pod list in csi-mock-volumes-8385-7384: Get "https://127.0.0.1:46737/api/v1/namespaces/csi-mock-volumes-8385-7384/pods": dial tcp 127.0.0.1:46737: connect: connection refused - error from a previous attempt: EOF
    ERROR: get pod list in csi-mock-volumes-8385-7384: Get "https://127.0.0.1:46737/api/v1/namespaces/csi-mock-volumes-8385-7384/pods": dial tcp 127.0.0.1:46737: connect: connection refused
... skipping 18 lines ...
    STEP: deleting the driver namespace: csi-mock-volumes-8385-7384 07/08/22 07:34:35.218
    ERROR: get pod list in csi-mock-volumes-8385-7384: Get "https://127.0.0.1:46737/api/v1/namespaces/csi-mock-volumes-8385-7384/pods": dial tcp 127.0.0.1:46737: connect: connection refused
    Jul  8 07:34:35.302: INFO: error deleting namespace csi-mock-volumes-8385-7384: Delete "https://127.0.0.1:46737/api/v1/namespaces/csi-mock-volumes-8385-7384": dial tcp 127.0.0.1:46737: connect: connection refused
    ERROR: get pod list in csi-mock-volumes-8385-7384: Get "https://127.0.0.1:46737/api/v1/namespaces/csi-mock-volumes-8385-7384/pods": dial tcp 127.0.0.1:46737: connect: connection refused
    [AfterEach] [sig-storage] CSI mock volume
      test/e2e/framework/framework.go:187
    Jul  8 07:34:35.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    Jul  8 07:34:35.452: FAIL: All nodes should be ready after test, Get "https://127.0.0.1:46737/api/v1/nodes": dial tcp 127.0.0.1:46737: connect: connection refused

    Full Stack Trace

    STEP: Destroying namespace "csi-mock-volumes-8385" for this suite. 07/08/22 07:34:35.452
    STEP: Collecting events from namespace "csi-mock-volumes-8385". 07/08/22 07:34:35.502
    Jul  8 07:34:35.552: INFO: Unexpected error: failed to list events in namespace "csi-mock-volumes-8385": 
        <*url.Error | 0xc001f24600>: {
            Op: "Get",
            URL: "https://127.0.0.1:46737/api/v1/namespaces/csi-mock-volumes-8385/events",
            Err: <*net.OpError | 0xc0020301e0>{
                Op: "dial",
                Net: "tcp",
                Source: nil,
... skipping 5 lines ...
                Err: <*os.SyscallError | 0xc0027c62c0>{
                    Syscall: "connect",
                    Err: <syscall.Errno>0x6f,
                },
            },
        }
    Jul  8 07:34:35.552: FAIL: failed to list events in namespace "csi-mock-volumes-8385": Get "https://127.0.0.1:46737/api/v1/namespaces/csi-mock-volumes-8385/events": dial tcp 127.0.0.1:46737: connect: connection refused

    Full Stack Trace
    k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc0036e6830, {0xc002e6c198, 0x15})
    	test/e2e/framework/util.go:909 +0x191
    k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7b33508, 0xc0029ac300}, {0xc002e6c198, 0x15})
    	test/e2e/framework/util.go:927 +0x8d
    k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
    	test/e2e/framework/framework.go:402 +0x77d
    panic({0x6d6dac0, 0xc0048da480})
    	/usr/local/go/src/runtime/panic.go:838 +0x207
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()
    	test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d
    panic({0x6d6fc00, 0xc000a10620})
    	/usr/local/go/src/runtime/panic.go:838 +0x207
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc002acc140, 0x9c}, {0xc001a6f788?, 0x721e46e?, 0xc001a6f7b0?})
    	test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197
    k8s.io/kubernetes/test/e2e/framework.Failf({0x72ee744?, 0xc0029ac300?}, {0xc001a6fa78?, 0x724a217?, 0x10?})
    	test/e2e/framework/log.go:51 +0x12c
    k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000a85e40)
    	test/e2e/framework/framework.go:483 +0x745
    [ReportAfterEach] TOP-LEVEL
... skipping 2 lines ...

  Jul  8 07:34:35.452: All nodes should be ready after test, Get "https://127.0.0.1:46737/api/v1/nodes": dial tcp 127.0.0.1:46737: connect: connection refused
  In [AfterEach] at: vendor/github.com/onsi/ginkgo/v2/internal/suite.go:596
------------------------------
SSSS
------------------------------
• [FAILED] [90.647 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
  [It] should *not* be restarted with a non-local redirect http liveness probe
  test/e2e/common/node/container_probe.go:293

  Begin Captured StdOut/StdErr Output >>
    {"msg":"FAILED [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe","completed":17,"skipped":140,"failed":1,"failures":["[sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe"]}
  << End Captured StdOut/StdErr Output

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [sig-node] Probing container
      test/e2e/framework/framework.go:186
    STEP: Creating a kubernetes client 07/08/22 07:33:04.995
... skipping 14 lines ...
    Jul  8 07:33:13.200: INFO: Pod "liveness-db6238c4-9759-4c24-a35f-abcbbce82f29": Phase="Pending", Reason="", readiness=false. Elapsed: 8.036794661s
    Jul  8 07:33:15.200: INFO: Pod "liveness-db6238c4-9759-4c24-a35f-abcbbce82f29": Phase="Running", Reason="", readiness=true. Elapsed: 10.037049267s
    Jul  8 07:33:15.200: INFO: Pod "liveness-db6238c4-9759-4c24-a35f-abcbbce82f29" satisfied condition "not pending"
    Jul  8 07:33:15.200: INFO: Started pod liveness-db6238c4-9759-4c24-a35f-abcbbce82f29 in namespace container-probe-6913
    STEP: checking the pod's current state and verifying that restartCount is present 07/08/22 07:33:15.2
    Jul  8 07:33:15.203: INFO: Initial restart count of pod liveness-db6238c4-9759-4c24-a35f-abcbbce82f29 is 0
    Jul  8 07:34:35.640: INFO: Unexpected error: getting pod : 
        <*url.Error | 0xc0026f7620>: {
            Op: "Get",
            URL: "https://127.0.0.1:46737/api/v1/namespaces/container-probe-6913/pods/liveness-db6238c4-9759-4c24-a35f-abcbbce82f29",
            Err: <*net.OpError | 0xc0026a95e0>{
                Op: "dial",
                Net: "tcp",
                Source: nil,
... skipping 5 lines ...
                Err: <*os.SyscallError | 0xc00325dc00>{
                    Syscall: "connect",
                    Err: <syscall.Errno>0x6f,
                },
            },
        }
    Jul  8 07:34:35.640: FAIL: getting pod : Get "https://127.0.0.1:46737/api/v1/namespaces/container-probe-6913/pods/liveness-db6238c4-9759-4c24-a35f-abcbbce82f29": dial tcp 127.0.0.1:46737: connect: connection refused

    Full Stack Trace
    k8s.io/kubernetes/test/e2e/common/node.RunLivenessTest(0xc000e282c0, 0xc0017eb000, 0x0, 0xc0025ce200?)
    	test/e2e/common/node/container_probe.go:910 +0x96b
    k8s.io/kubernetes/test/e2e/common/node.glob..func2.14()
    	test/e2e/common/node/container_probe.go:300 +0x1b1
    STEP: deleting the pod 07/08/22 07:34:35.64
    [AfterEach] [sig-node] Probing container
      test/e2e/framework/framework.go:187
    STEP: Collecting events from namespace "container-probe-6913". 07/08/22 07:34:35.641
    Jul  8 07:34:35.641: INFO: Unexpected error: failed to list events in namespace "container-probe-6913": 
        <*url.Error | 0xc0026f7bf0>: {
            Op: "Get",
            URL: "https://127.0.0.1:46737/api/v1/namespaces/container-probe-6913/events",
            Err: <*net.OpError | 0xc0026a9810>{
                Op: "dial",
                Net: "tcp",
                Source: nil,
... skipping 5 lines ...
                Err: <*os.SyscallError | 0xc00105e400>{
                    Syscall: "connect",
                    Err: <syscall.Errno>0x6f,
                },
            },
        }
    Jul  8 07:34:35.641: FAIL: failed to list events in namespace "container-probe-6913": Get "https://127.0.0.1:46737/api/v1/namespaces/container-probe-6913/events": dial tcp 127.0.0.1:46737: connect: connection refused

    Full Stack Trace
    k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc0024af770, {0xc002a082d0, 0x14})
    	test/e2e/framework/util.go:909 +0x191
    k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7b33508, 0xc000945c80}, {0xc002a082d0, 0x14})
    	test/e2e/framework/util.go:927 +0x8d
    k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000e282c0, 0x1?)
    	test/e2e/framework/framework.go:181 +0x8b
    k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000e282c0)
    	test/e2e/framework/framework.go:435 +0x1e2
    STEP: Destroying namespace "container-probe-6913" for this suite. 07/08/22 07:34:35.641
    Jul  8 07:34:35.642: FAIL: Couldn't delete ns: "container-probe-6913": Delete "https://127.0.0.1:46737/api/v1/namespaces/container-probe-6913": dial tcp 127.0.0.1:46737: connect: connection refused (&url.Error{Op:"Delete", URL:"https://127.0.0.1:46737/api/v1/namespaces/container-probe-6913", Err:(*net.OpError)(0xc0026a99f0)})

    Full Stack Trace
    panic({0x6d6dac0, 0xc001b04640})
    	/usr/local/go/src/runtime/panic.go:838 +0x207
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()
    	test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d
    panic({0x6d6fc00, 0xc00061c380})
    	/usr/local/go/src/runtime/panic.go:838 +0x207
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc000a3e2a0, 0xd2}, {0xc0024af228?, 0x721e46e?, 0xc0024af248?})
    	test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197
    k8s.io/kubernetes/test/e2e/framework.Fail({0xc00163c840, 0xbd}, {0xc0024af2c0?, 0xc00071a7e0?, 0xc0024af2e8?})
    	test/e2e/framework/log.go:63 +0x145
    k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7ad92e0, 0xc0026f7bf0}, {0xc00105e480?, 0x0?, 0x0?})
    	test/e2e/framework/expect.go:76 +0x267
    k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
    	test/e2e/framework/expect.go:43
    k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc0024af770, {0xc002a082d0, 0x14})
... skipping 15 lines ...