PR | andyzhangx: fix: make account search backward compatible
Result | FAILURE
Tests | 1 failed / 13 succeeded
Started |
Elapsed | 32m22s
Revision | 450484b0a629a48dc66219f95a432bb07f1575ac
Refs | 1166
job-version | v1.27.0-alpha.0.1039+84200d0470ed31
kubetest-version | v20230117-50d6df3625
revision | v1.27.0-alpha.0.1039+84200d0470ed31
error during make e2e-test: exit status 2
from junit_runner.xml
kubetest Check APIReachability
kubetest Deferred TearDown
kubetest DumpClusterLogs
kubetest GetDeployer
kubetest IsUp
kubetest Prepare
kubetest TearDown
kubetest TearDown Previous
kubetest Timeout
kubetest Up
kubetest kubectl version
kubetest list nodes
kubetest test setup
... skipping 97 lines ...
100 11345  100 11345    0     0    98k      0 --:--:-- --:--:-- --:--:--   98k
Downloading https://get.helm.sh/helm-v3.10.3-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
docker pull k8sprow.azurecr.io/azurefile-csi:e2e-bfe5845fbbeff1bc0a117d1a235f6bd38fe3c1d3 || make container-all push-manifest
Error response from daemon: manifest for k8sprow.azurecr.io/azurefile-csi:e2e-bfe5845fbbeff1bc0a117d1a235f6bd38fe3c1d3 not found: manifest unknown: manifest tagged by "e2e-bfe5845fbbeff1bc0a117d1a235f6bd38fe3c1d3" is not found
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver'
CGO_ENABLED=0 GOOS=windows go build -a -ldflags "-X sigs.k8s.io/azurefile-csi-driver/pkg/azurefile.driverVersion=e2e-bfe5845fbbeff1bc0a117d1a235f6bd38fe3c1d3 -X sigs.k8s.io/azurefile-csi-driver/pkg/azurefile.gitCommit=bfe5845fbbeff1bc0a117d1a235f6bd38fe3c1d3 -X sigs.k8s.io/azurefile-csi-driver/pkg/azurefile.buildDate=2023-01-18T03:30:45Z -s -w -extldflags '-static'" -mod vendor -o _output/amd64/azurefileplugin.exe ./pkg/azurefileplugin
docker buildx rm container-builder || true
ERROR: no builder "container-builder" found
docker buildx create --use --name=container-builder
container-builder
# enable qemu for arm64 build
# https://github.com/docker/buildx/issues/464#issuecomment-741507760
docker run --privileged --rm tonistiigi/binfmt --uninstall qemu-aarch64
Unable to find image 'tonistiigi/binfmt:latest' locally
... skipping 810 lines ...
#7 101.1 Preparing to unpack .../libkrb5-3_1.18.3-6+deb11u3_arm64.deb ...
#7 101.2 Unpacking libkrb5-3:arm64 (1.18.3-6+deb11u3) over (1.18.3-6+deb11u1) ...
#7 102.2 Setting up libkrb5-3:arm64 (1.18.3-6+deb11u3) ...
#7 103.5 (Reading database ... 4151 files and directories currently installed.)
#7 103.5 Preparing to unpack .../libgssapi-krb5-2_1.18.3-6+deb11u3_arm64.deb ...
#7 103.6 Unpacking libgssapi-krb5-2:arm64 (1.18.3-6+deb11u3) over (1.18.3-6+deb11u1) ...
#7 104.5 Setting up libgssapi-krb5-2:arm64 (1.18.3-6+deb11u3) ...
#7 105.0 dpkg (subprocess): unable to execute split package reassembly (dpkg-split): Exec format error
#7 105.0 dpkg: error processing archive /var/cache/apt/archives/libpcre2-8-0_10.36-2+deb11u1_arm64.deb (--unpack):
#7 105.0  subprocess dpkg-split returned error exit status 2
#7 105.0 Errors were encountered while processing:
#7 105.0  /var/cache/apt/archives/libpcre2-8-0_10.36-2+deb11u1_arm64.deb
#7 105.1 E: Sub-process /usr/bin/dpkg returned an error code (1)
#7 105.1 E: Problem executing scripts DPkg::Post-Invoke 'rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true'
#7 105.1 E: Sub-process returned an error code
#7 ERROR: process "/bin/sh -c apt update && apt upgrade -y && apt-mark unhold libcap2 && clean-install ca-certificates cifs-utils util-linux e2fsprogs mount udev xfsprogs nfs-common netbase" did not complete successfully: exit code: 100
------
 > [3/3] RUN apt update && apt upgrade -y && apt-mark unhold libcap2 && clean-install ca-certificates cifs-utils util-linux e2fsprogs mount udev xfsprogs nfs-common netbase:
... skipping repeated step output (same apt/dpkg errors as above) ...
------
Dockerfile:21
--------------------
  19 |     COPY ${binary} /azurefileplugin
  20 |
  21 | >>> RUN apt update && apt upgrade -y && apt-mark unhold libcap2 && clean-install ca-certificates cifs-utils util-linux e2fsprogs mount udev xfsprogs nfs-common netbase
  22 |
  23 |     LABEL maintainers="andyzhangx"
--------------------
ERROR: failed to solve: process "/bin/sh -c apt update && apt upgrade -y && apt-mark unhold libcap2 && clean-install ca-certificates cifs-utils util-linux e2fsprogs mount udev xfsprogs nfs-common netbase" did not complete successfully: exit code: 100
make[3]: *** [Makefile:136: container-linux] Error 1
make[3]: Leaving directory '/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver'
make[2]: *** [Makefile:153: container-all] Error 2
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver'
make[1]: *** [Makefile:94: e2e-bootstrap] Error 2
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver'
------------------------------
[BeforeSuite] [FAILED] [572.025 seconds]
[BeforeSuite]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:75

  Begin Captured GinkgoWriter Output >>
    Jan 18 03:36:11.269: INFO: The --provider flag is not set. Continuing as if --provider=skeleton had been used.
  << End Captured GinkgoWriter Output

  Unexpected error:
      <*exec.ExitError | 0xc000436020>: {
          ProcessState: {
              pid: 15418,
              status: 512,
              rusage: {
                  Utime: {Sec: 1444, Usec: 515799},
... skipping 231 lines ...
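The failure is in the arm64 stage of the image build: dpkg shells out to its dpkg-split helper, and "Exec format error" on arm64 binaries during a build hosted on an amd64 Prow node typically means no working qemu-aarch64 binfmt handler was registered when the RUN step executed; the log above shows the handler being uninstalled, with the matching install presumably in the skipped lines. A minimal re-registration sketch, assuming a Docker host with buildx; the builder name comes from the log, everything else is illustrative:

    # Re-register the qemu-aarch64 handler (counterpart of the --uninstall call in the log)
    docker run --privileged --rm tonistiigi/binfmt --install arm64
    # Confirm the kernel now routes arm64 binaries through qemu ("enabled" should appear)
    cat /proc/sys/fs/binfmt_misc/qemu-aarch64
    # Retry the failing stage on the same builder
    docker buildx build --builder container-builder --platform linux/arm64 --progress plain .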
I0118 03:28:20.467821 1 azure_securitygroupclient.go:64] Azure SecurityGroupsClient (read ops) using rate limit config: QPS=6, bucket=20
I0118 03:28:20.467828 1 azure_securitygroupclient.go:67] Azure SecurityGroupsClient (write ops) using rate limit config: QPS=100, bucket=1000
I0118 03:28:20.467835 1 azure_publicipclient.go:64] Azure PublicIPAddressesClient (read ops) using rate limit config: QPS=6, bucket=20
I0118 03:28:20.467839 1 azure_publicipclient.go:67] Azure PublicIPAddressesClient (write ops) using rate limit config: QPS=100, bucket=1000
I0118 03:28:20.473824 1 azure.go:743] Setting up informers for Azure cloud provider
I0118 03:28:20.487836 1 shared_informer.go:255] Waiting for caches to sync for tokens
W0118 03:28:20.539520 1 azure_config.go:53] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0118 03:28:20.539544 1 controllermanager.go:564] Starting "endpointslice"
I0118 03:28:20.566179 1 controllermanager.go:593] Started "endpointslice"
I0118 03:28:20.566201 1 controllermanager.go:564] Starting "nodelifecycle"
I0118 03:28:20.566328 1 topologycache.go:183] Ignoring node k8s-master-18041443-0 because it is not ready: [{MemoryPressure False 2023-01-18 03:28:20 +0000 UTC 2023-01-18 03:28:20 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2023-01-18 03:28:20 +0000 UTC 2023-01-18 03:28:20 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2023-01-18 03:28:20 +0000 UTC 2023-01-18 03:28:20 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2023-01-18 03:28:20 +0000 UTC 2023-01-18 03:28:20 +0000 UTC KubeletNotReady failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "k8s-master-18041443-0" not found}]
I0118 03:28:20.566372 1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0118 03:28:20.566391 1 endpointslice_controller.go:257] Starting endpoint slice controller
I0118 03:28:20.566396 1 shared_informer.go:255] Waiting for caches to sync for endpoint_slice
I0118 03:28:20.580703 1 node_lifecycle_controller.go:377] Sending events to api server.
I0118 03:28:20.580885 1 taint_manager.go:163] "Sending events to api server"
I0118 03:28:20.580942 1 node_lifecycle_controller.go:505] Controller will reconcile labels.
... skipping 106 lines ...
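The azure-cloud-provider secret warning above is non-fatal: the controller logs "skip initializing from secret" and falls back to its file-based cloud config. If reading the config from that secret were actually wanted, the service account would need RBAC access to it. A hedged sketch with plain kubectl; the role and binding names here are made up for illustration:

    # Allow the azure-cloud-provider service account to read its config secret
    kubectl -n kube-system create role cloud-provider-secret-reader \
      --verb=get --resource=secrets --resource-name=azure-cloud-provider
    kubectl -n kube-system create rolebinding cloud-provider-secret-reader \
      --role=cloud-provider-secret-reader \
      --serviceaccount=kube-system:azure-cloud-provider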
I0118 03:28:22.342555 1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0118 03:28:22.342588 1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0118 03:28:22.343160 1 controllermanager.go:593] Started "attachdetach"
I0118 03:28:22.343178 1 controllermanager.go:564] Starting "pv-protection"
I0118 03:28:22.343269 1 attach_detach_controller.go:328] Starting attach detach controller
I0118 03:28:22.343276 1 shared_informer.go:255] Waiting for caches to sync for attach detach
W0118 03:28:22.349608 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k8s-master-18041443-0" does not exist
I0118 03:28:22.485010 1 controllermanager.go:593] Started "pv-protection"
I0118 03:28:22.485040 1 controllermanager.go:564] Starting "replicationcontroller"
I0118 03:28:22.485076 1 pv_protection_controller.go:79] Starting PV protection controller
I0118 03:28:22.485090 1 shared_informer.go:255] Waiting for caches to sync for PV protection
I0118 03:28:22.642100 1 controllermanager.go:593] Started "replicationcontroller"
I0118 03:28:22.642123 1 controllermanager.go:564] Starting "resourcequota"
... skipping 161 lines ...
I0118 03:28:59.911673 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: uksouth: :0
I0118 03:29:04.912622 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: uksouth: :0
I0118 03:29:09.913463 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: uksouth: :0
I0118 03:29:10.258048 1 replica_set.go:577] "Too few replicas" replicaSet="kube-system/coredns-75bdb78f8b" need=1 creating=1
I0118 03:29:10.258206 1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-75bdb78f8b to 1"
I0118 03:29:10.286657 1 replica_set.go:577] "Too few replicas" replicaSet="kube-system/coredns-autoscaler-84bb8dc9d5" need=1 creating=1
I0118 03:29:10.287914 1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0118 03:29:10.288873 1 event.go:294] "Event occurred" object="kube-system/coredns-autoscaler" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-autoscaler-84bb8dc9d5 to 1"
I0118 03:29:10.318447 1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0118 03:29:10.324740 1 event.go:294] "Event occurred" object="kube-system/coredns-75bdb78f8b" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-75bdb78f8b-qlrfp"
I0118 03:29:10.359015 1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns-autoscaler" err="Operation cannot be fulfilled on deployments.apps \"coredns-autoscaler\": the object has been modified; please apply your changes to the latest version and try again"
I0118 03:29:10.359605 1 event.go:294] "Event occurred" object="kube-system/coredns-autoscaler-84bb8dc9d5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-autoscaler-84bb8dc9d5-ttm7x"
I0118 03:29:11.775532 1 event.go:294] "Event occurred" object="kube-system/azure-ip-masq-agent" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azure-ip-masq-agent-4b5qz"
I0118 03:29:11.801023 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wrjcr"
I0118 03:29:14.189512 1 replica_set.go:577] "Too few replicas" replicaSet="kube-system/metrics-server-66dd6687d9" need=1 creating=1
I0118 03:29:14.193896 1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-66dd6687d9 to 1"
I0118 03:29:14.297238 1 event.go:294] "Event occurred" object="kube-system/metrics-server-66dd6687d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-66dd6687d9-8ft9f"
I0118 03:29:14.323883 1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I0118 03:29:14.914359 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: uksouth: :0
I0118 03:29:19.914528 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: uksouth: :0
I0118 03:29:24.915406 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: uksouth: :0
E0118 03:29:25.099403 1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0118 03:29:25.626281 1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0118 03:29:29.916031 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: uksouth: :0
I0118 03:29:34.916965 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: uksouth: :0
I0118 03:29:39.920018 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: uksouth: :0
I0118 03:29:44.920388 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: uksouth: :0
I0118 03:29:49.920858 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: uksouth: :0
I0118 03:29:54.921201 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: uksouth: :0
I0118 03:29:59.921689 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: uksouth: :0
I0118 03:30:04.922588 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: uksouth: :0
I0118 03:30:09.922900 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: uksouth: :0
I0118 03:30:13.941154 1 topologycache.go:179] Ignoring node k8s-master-18041443-0 because it has an excluded label
I0118 03:30:13.941413 1 topologycache.go:183] Ignoring node 1804k8s001 because it is not ready: [{MemoryPressure False 2023-01-18 03:30:13 +0000 UTC 2023-01-18 03:30:13 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2023-01-18 03:30:13 +0000 UTC 2023-01-18 03:30:13 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2023-01-18 03:30:13 +0000 UTC 2023-01-18 03:30:13 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2023-01-18 03:30:13 +0000 UTC 2023-01-18 03:30:13 +0000 UTC KubeletNotReady CSINode is not yet initialized}]
I0118 03:30:13.941582 1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
W0118 03:30:13.949994 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="1804k8s001" does not exist
I0118 03:30:13.975861 1 ttl_controller.go:276] "Changed ttl annotation" node="1804k8s001" new_ttl="0s"
I0118 03:30:14.685582 1 topologycache.go:183] Ignoring node 1804k8s001 because it is not ready: [{MemoryPressure False 2023-01-18 03:30:13 +0000 UTC 2023-01-18 03:30:13 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2023-01-18 03:30:13 +0000 UTC 2023-01-18 03:30:13 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2023-01-18 03:30:13 +0000 UTC 2023-01-18 03:30:13 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2023-01-18 03:30:13 +0000 UTC 2023-01-18 03:30:13 +0000 UTC KubeletNotReady CSINode is not yet initialized}]
I0118 03:30:14.685627 1 topologycache.go:183] Ignoring node 1804k8s000 because it is not ready: [{MemoryPressure False 2023-01-18 03:30:14 +0000 UTC 2023-01-18 03:30:14 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2023-01-18 03:30:14 +0000 UTC 2023-01-18 03:30:14 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2023-01-18 03:30:14 +0000 UTC 2023-01-18 03:30:14 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2023-01-18 03:30:14 +0000 UTC 2023-01-18 03:30:14 +0000 UTC KubeletNotReady CSINode is not yet initialized}]
I0118 03:30:14.685642 1 topologycache.go:179] Ignoring node k8s-master-18041443-0 because it has an excluded label
I0118 03:30:14.685648 1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
W0118 03:30:14.685796 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="1804k8s000" does not exist
I0118 03:30:14.706838 1 ttl_controller.go:276] "Changed ttl annotation" node="1804k8s000" new_ttl="0s"
I0118 03:30:14.923349 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: uksouth: :0
I0118 03:30:14.923371 1 node_lifecycle_controller.go:771] Controller observed a new Node: "1804k8s001"
I0118 03:30:14.923395 1 controller_utils.go:168] "Recording event message for node" event="Registered Node 1804k8s001 in Controller" node="1804k8s001"
I0118 03:30:14.923414 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: uksouth: :1
I0118 03:30:14.923424 1 node_lifecycle_controller.go:771] Controller observed a new Node: "1804k8s000"
... skipping 27 lines ...
2023/01/18 03:45:47 Check successfully
2023/01/18 03:45:47 create example deployments
begin to create deployment examples ...
storageclass.storage.k8s.io/azurefile-csi created
Applying config "deploy/example/windows/deployment.yaml"
Waiting for deployment "deployment-azurefile-win" rollout to finish: 0 of 1 updated replicas are available...
error: timed out waiting for the condition
Failed to apply config "deploy/example/windows/deployment.yaml"
------------------------------
[AfterSuite] [FAILED] [307.349 seconds]
[AfterSuite]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:148

  Unexpected error:
      <*exec.ExitError | 0xc000436020>: {
          ProcessState: {
              pid: 42929,
              status: 256,
              rusage: {
                  Utime: {Sec: 1, Usec: 101265},
... skipping 20 lines ...
  occurred
In [AfterSuite] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:261
------------------------------

Summarizing 2 Failures:
  [FAIL] [BeforeSuite]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:261
  [FAIL] [AfterSuite]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:261

Ran 0 of 38 Specs in 879.375 seconds
FAIL! -- A BeforeSuite node failed so all tests were skipped.

You're using deprecated Ginkgo functionality:
=============================================
Support for custom reporters has been removed in V2. Please read the documentation linked to below for Ginkgo's new behavior and for a migration path:
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#removed-custom-reporters

To silence deprecations that can be silenced set the following environment variable: ACK_GINKGO_DEPRECATIONS=2.4.0

--- FAIL: TestE2E (879.38s)
FAIL
FAIL    sigs.k8s.io/azurefile-csi-driver/test/e2e       879.556s
FAIL
make: *** [Makefile:85: e2e-test] Error 1
2023/01/18 03:50:50 process.go:155: Step 'make e2e-test' finished in 20m5.667502931s
2023/01/18 03:50:50 aksengine_helpers.go:425: downloading /root/tmp3591004475/log-dump.sh from https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump.sh
2023/01/18 03:50:50 util.go:70: curl https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump.sh
2023/01/18 03:50:51 process.go:153: Running: chmod +x /root/tmp3591004475/log-dump.sh
2023/01/18 03:50:51 process.go:155: Step 'chmod +x /root/tmp3591004475/log-dump.sh' finished in 1.255143ms
2023/01/18 03:50:51 aksengine_helpers.go:425: downloading /root/tmp3591004475/log-dump-daemonset.yaml from https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump-daemonset.yaml
... skipping 33 lines ...
ssh key file /root/.ssh/id_rsa does not exist. Exiting.
2023/01/18 03:51:12 process.go:155: Step 'bash -c /root/tmp3591004475/win-ci-logs-collector.sh kubetest-3ioi04ht.uksouth.cloudapp.azure.com /root/tmp3591004475 /root/.ssh/id_rsa' finished in 3.581682ms
2023/01/18 03:51:12 aksengine.go:1141: Deleting resource group: kubetest-3ioi04ht.
2023/01/18 03:57:21 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2023/01/18 03:57:21 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
2023/01/18 03:57:21 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 413.390608ms
2023/01/18 03:57:21 main.go:328: Something went wrong: encountered 1 errors: [error during make e2e-test: exit status 2]
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
c3d5d854cb59
... skipping 4 lines ...
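The AfterSuite rollout timeout is most likely downstream of the BeforeSuite build failure: the e2e image tag was never built and pushed, so the Windows example deployment could not pull it. For manual triage on a repro cluster, a sketch assuming kubeconfig access; the deployment name comes from the log, while the pod label is a guess:

    # Watch the stuck rollout and inspect why the pod never became available
    kubectl rollout status deployment/deployment-azurefile-win --timeout=5m
    kubectl describe deployment deployment-azurefile-win
    kubectl get pods -o wide    # look for ImagePullBackOff / Pending on the Windows nodes
    kubectl logs -n kube-system -l app=csi-azurefile-node-win --tail=50    # label is hypothetical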