PR | andyzhangx: fix: remove python in container image to fix CVE
Result | FAILURE
Tests | 1 failed / 13 succeeded
Started |
Elapsed | 28m25s
Revision | 8643410eeea99d7ac3cf18ba5ae683a148f77d57
Refs | 1169
job-version | v1.27.0-alpha.1.12+fab126d7f380b3
kubetest-version | v20230117-50d6df3625
revision | v1.27.0-alpha.1.12+fab126d7f380b3
error during make e2e-test: exit status 2
from junit_runner.xml
kubetest Check APIReachability
kubetest Deferred TearDown
kubetest DumpClusterLogs
kubetest GetDeployer
kubetest IsUp
kubetest Prepare
kubetest TearDown
kubetest TearDown Previous
kubetest Timeout
kubetest Up
kubetest kubectl version
kubetest list nodes
kubetest test setup
... skipping 97 lines ...
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 11345  100 11345    0     0   149k      0 --:--:-- --:--:-- --:--:--  149k
Downloading https://get.helm.sh/helm-v3.11.0-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
docker pull k8sprow.azurecr.io/azurefile-csi:e2e-3f5be661c8e98cc9695ad163f8a924094336f8cd || make container-all push-manifest
Error response from daemon: manifest for k8sprow.azurecr.io/azurefile-csi:e2e-3f5be661c8e98cc9695ad163f8a924094336f8cd not found: manifest unknown: manifest tagged by "e2e-3f5be661c8e98cc9695ad163f8a924094336f8cd" is not found
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver'
CGO_ENABLED=0 GOOS=windows go build -a -ldflags "-X sigs.k8s.io/azurefile-csi-driver/pkg/azurefile.driverVersion=e2e-3f5be661c8e98cc9695ad163f8a924094336f8cd -X sigs.k8s.io/azurefile-csi-driver/pkg/azurefile.gitCommit=3f5be661c8e98cc9695ad163f8a924094336f8cd -X sigs.k8s.io/azurefile-csi-driver/pkg/azurefile.buildDate=2023-01-26T03:34:49Z -s -w -extldflags '-static'" -mod vendor -o _output/amd64/azurefileplugin.exe ./pkg/azurefileplugin
docker buildx rm container-builder || true
ERROR: no builder "container-builder" found
docker buildx create --use --name=container-builder
container-builder
# enable qemu for arm64 build
# https://github.com/docker/buildx/issues/464#issuecomment-741507760
docker run --privileged --rm tonistiigi/binfmt --uninstall qemu-aarch64
Unable to find image 'tonistiigi/binfmt:latest' locally
... skipping 680 lines ...
#9 0.175
#9 0.175 The following packages have unmet dependencies:
#9 0.176  adduser : Depends: debconf (>= 0.5) but it is not going to be installed or
#9 0.176                     debconf-2.0
#9 0.176  passwd : Depends: libpam0g (>= 0.99.7.1) but it is not going to be installed
#9 0.176           Depends: libpam-modules but it is not going to be installed
#9 0.178 E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
#9 ERROR: process "/bin/sh -c apt remove python3.9 libpython3.9-stdlib libpython3.9-minimal python3.9-minimal perl-base tar -y" did not complete successfully: exit code: 100
------
 > [4/4] RUN apt remove python3.9 libpython3.9-stdlib libpython3.9-minimal python3.9-minimal perl-base tar -y:
#9 0.175 distribution that some required packages have not yet been created
#9 0.175 or been moved out of Incoming.
#9 0.175 The following information may help to resolve the situation:
#9 0.175
#9 0.175 The following packages have unmet dependencies:
#9 0.176  adduser : Depends: debconf (>= 0.5) but it is not going to be installed or
#9 0.176                     debconf-2.0
#9 0.176  passwd : Depends: libpam0g (>= 0.99.7.1) but it is not going to be installed
#9 0.176           Depends: libpam-modules but it is not going to be installed
#9 0.178 E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
------
Dockerfile:22
--------------------
  20 |
  21 |     RUN apt update && apt upgrade -y && apt-mark unhold libcap2 && clean-install ca-certificates cifs-utils util-linux e2fsprogs mount udev xfsprogs nfs-common netbase
  22 | >>> RUN apt remove python3.9 libpython3.9-stdlib libpython3.9-minimal python3.9-minimal perl-base tar -y
  23 |
  24 |     LABEL maintainers="andyzhangx"
--------------------
ERROR: failed to solve: process "/bin/sh -c apt remove python3.9 libpython3.9-stdlib libpython3.9-minimal python3.9-minimal perl-base tar -y" did not complete successfully: exit code: 100
make[3]: *** [Makefile:136: container-linux] Error 1
make[3]: Leaving directory '/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver'
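Note: the failure above is apt refusing to remove Essential packages. perl-base and tar are marked Essential on Debian, and removing perl-base would also drag out debconf, which adduser still depends on, so the resolver reports the breaks shown and the step exits 100. A minimal sketch of a RUN line that would get past the resolver, assuming a Debian-based image (the --allow-remove-essential flag is apt-get's escape hatch for Essential packages and is not what this PR used):

    RUN apt-get remove -y --allow-remove-essential \
            python3.9 libpython3.9-stdlib libpython3.9-minimal python3.9-minimal \
            perl-base tar \
        && apt-get autoremove -y \
        && rm -rf /var/lib/apt/lists/*

Forcing out Essential packages can leave the image unable to run apt or anything that shells out to tar, so dropping CVE-carrying packages via a multi-stage or distroless final image is often the safer route.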
make[3]: Entering directory '/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver'
CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build -a -ldflags "-X sigs.k8s.io/azurefile-csi-driver/pkg/azurefile.driverVersion=e2e-3f5be661c8e98cc9695ad163f8a924094336f8cd -X sigs.k8s.io/azurefile-csi-driver/pkg/azurefile.gitCommit=3f5be661c8e98cc9695ad163f8a924094336f8cd -X sigs.k8s.io/azurefile-csi-driver/pkg/azurefile.buildDate=2023-01-26T03:34:49Z -s -w -extldflags '-static'" -mod vendor -o _output/arm64/azurefileplugin ./pkg/azurefileplugin
make[3]: Leaving directory '/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver'
make[3]: Entering directory '/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver'
docker buildx build --pull --output=type=registry --platform="linux/arm64" \
... skipping 559 lines ...
#8 0.709
#8 0.709 The following packages have unmet dependencies:
#8 0.716  adduser : Depends: debconf (>= 0.5) but it is not going to be installed or
#8 0.717                     debconf-2.0
#8 0.717  passwd : Depends: libpam0g (>= 0.99.7.1) but it is not going to be installed
#8 0.717           Depends: libpam-modules but it is not going to be installed
#8 0.744 E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
#8 ERROR: process "/bin/sh -c apt remove python3.9 libpython3.9-stdlib libpython3.9-minimal python3.9-minimal perl-base tar -y" did not complete successfully: exit code: 100
------
 > [4/4] RUN apt remove python3.9 libpython3.9-stdlib libpython3.9-minimal python3.9-minimal perl-base tar -y:
#8 0.709 distribution that some required packages have not yet been created
#8 0.709 or been moved out of Incoming.
#8 0.709 The following information may help to resolve the situation:
#8 0.709
#8 0.709 The following packages have unmet dependencies:
#8 0.716  adduser : Depends: debconf (>= 0.5) but it is not going to be installed or
#8 0.717                     debconf-2.0
#8 0.717  passwd : Depends: libpam0g (>= 0.99.7.1) but it is not going to be installed
#8 0.717           Depends: libpam-modules but it is not going to be installed
#8 0.744 E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
------
Dockerfile:22
--------------------
  20 |
  21 |     RUN apt update && apt upgrade -y && apt-mark unhold libcap2 && clean-install ca-certificates cifs-utils util-linux e2fsprogs mount udev xfsprogs nfs-common netbase
  22 | >>> RUN apt remove python3.9 libpython3.9-stdlib libpython3.9-minimal python3.9-minimal perl-base tar -y
  23 |
  24 |     LABEL maintainers="andyzhangx"
--------------------
ERROR: failed to solve: process "/bin/sh -c apt remove python3.9 libpython3.9-stdlib libpython3.9-minimal python3.9-minimal perl-base tar -y" did not complete successfully: exit code: 100
make[3]: *** [Makefile:136: container-linux] Error 1
make[3]: Leaving directory '/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver'
make[2]: *** [Makefile:153: container-all] Error 2
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver'
make[1]: *** [Makefile:94: e2e-bootstrap] Error 2
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver'
------------------------------
[BeforeSuite] [FAILED] [616.179 seconds]
[BeforeSuite]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:75

  Begin Captured GinkgoWriter Output >>
    Jan 26 03:37:47.928: INFO: The --provider flag is not set. Continuing as if --provider=skeleton had been used.
  << End Captured GinkgoWriter Output

  Unexpected error:
      <*exec.ExitError | 0xc000d28000>: {
          ProcessState: {
              pid: 15710,
              status: 512,
              rusage: {
                  Utime: {Sec: 1587, Usec: 515000},
... skipping 231 lines ...
I0126 03:32:40.652885       1 azure_securitygroupclient.go:64] Azure SecurityGroupsClient (read ops) using rate limit config: QPS=6, bucket=20
I0126 03:32:40.652906       1 azure_securitygroupclient.go:67] Azure SecurityGroupsClient (write ops) using rate limit config: QPS=100, bucket=1000
I0126 03:32:40.652918       1 azure_publicipclient.go:64] Azure PublicIPAddressesClient (read ops) using rate limit config: QPS=6, bucket=20
I0126 03:32:40.652931       1 azure_publicipclient.go:67] Azure PublicIPAddressesClient (write ops) using rate limit config: QPS=100, bucket=1000
I0126 03:32:40.665412       1 azure.go:743] Setting up informers for Azure cloud provider
I0126 03:32:40.685628       1 shared_informer.go:255] Waiting for caches to sync for tokens
W0126 03:32:40.757369       1 azure_config.go:53] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0126 03:32:40.757459       1 controllermanager.go:564] Starting "csrsigning"
I0126 03:32:40.764963       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/certs/ca.crt::/etc/kubernetes/certs/ca.key"
I0126 03:32:40.765575       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/certs/ca.crt::/etc/kubernetes/certs/ca.key"
I0126 03:32:40.766466       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/certs/ca.crt::/etc/kubernetes/certs/ca.key"
I0126 03:32:40.766990       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/certs/ca.crt::/etc/kubernetes/certs/ca.key"
I0126 03:32:40.767910       1 controllermanager.go:593] Started "csrsigning"
... skipping 38 lines ...
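Note: the W0126 03:32:40.757369 warning above appears benign for this run: the controller manager optionally reads cloud config from the kube-system/azure-cloud-provider secret, its service account lacks get permission on that secret here, and it falls back to the config file ("skip initializing from secret"). If the secret-based config were actually wanted, RBAC along these lines would grant it (a hypothetical sketch; the role and binding names are illustrative, not from this job):

    kubectl -n kube-system create role azure-cloud-provider-secret-reader \
        --verb=get --resource=secrets --resource-name=azure-cloud-provider
    kubectl -n kube-system create rolebinding azure-cloud-provider-secret-reader \
        --role=azure-cloud-provider-secret-reader \
        --serviceaccount=kube-system:azure-cloud-provider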
I0126 03:32:40.796049       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0126 03:32:40.796106       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0126 03:32:40.796188       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0126 03:32:40.796268       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0126 03:32:40.796435       1 controllermanager.go:593] Started "attachdetach"
I0126 03:32:40.796502       1 controllermanager.go:564] Starting "pv-protection"
W0126 03:32:40.805025       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k8s-master-19596913-0" does not exist
I0126 03:32:40.805144       1 attach_detach_controller.go:328] Starting attach detach controller
I0126 03:32:40.805225       1 shared_informer.go:255] Waiting for caches to sync for attach detach
I0126 03:32:40.852857       1 controllermanager.go:593] Started "pv-protection"
I0126 03:32:40.852878       1 controllermanager.go:564] Starting "root-ca-cert-publisher"
I0126 03:32:40.853408       1 pv_protection_controller.go:79] Starting PV protection controller
I0126 03:32:40.853420       1 shared_informer.go:255] Waiting for caches to sync for PV protection
... skipping 234 lines ...
I0126 03:33:20.341341       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: centralus: :0
I0126 03:33:25.341890       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: centralus: :0
I0126 03:33:30.342659       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: centralus: :0
I0126 03:33:34.890661       1 replica_set.go:577] "Too few replicas" replicaSet="kube-system/coredns-75bdb78f8b" need=1 creating=1
I0126 03:33:34.891491       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-75bdb78f8b to 1"
I0126 03:33:34.904921       1 replica_set.go:577] "Too few replicas" replicaSet="kube-system/coredns-autoscaler-84bb8dc9d5" need=1 creating=1
I0126 03:33:34.905273       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0126 03:33:34.906275       1 event.go:294] "Event occurred" object="kube-system/coredns-autoscaler" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-autoscaler-84bb8dc9d5 to 1"
I0126 03:33:34.934015       1 event.go:294] "Event occurred" object="kube-system/coredns-75bdb78f8b" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-75bdb78f8b-shm4p"
I0126 03:33:34.935067       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns-autoscaler" err="Operation cannot be fulfilled on deployments.apps \"coredns-autoscaler\": the object has been modified; please apply your changes to the latest version and try again"
I0126 03:33:34.939836       1 event.go:294] "Event occurred" object="kube-system/coredns-autoscaler-84bb8dc9d5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-autoscaler-84bb8dc9d5-49x9s"
I0126 03:33:34.940929       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0126 03:33:35.343089       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: centralus: :0
I0126 03:33:35.898852       1 event.go:294] "Event occurred" object="kube-system/azure-ip-masq-agent" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azure-ip-masq-agent-t7rt2"
I0126 03:33:35.907397       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lhlxz"
I0126 03:33:38.578531       1 replica_set.go:577] "Too few replicas" replicaSet="kube-system/metrics-server-66dd6687d9" need=1 creating=1
I0126 03:33:38.579409       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-66dd6687d9 to 1"
I0126 03:33:38.590658       1 event.go:294] "Event occurred" object="kube-system/metrics-server-66dd6687d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-66dd6687d9-llt25"
I0126 03:33:38.637954       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I0126 03:33:38.653699       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I0126 03:33:40.344579       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: centralus: :0
I0126 03:33:45.345248       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: centralus: :0
E0126 03:33:45.381023       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0126 03:33:45.852772       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0126 03:33:50.345820       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: centralus: :0
I0126 03:33:55.346791       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: centralus: :0
I0126 03:34:00.347332       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: centralus: :0
I0126 03:34:05.347825       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: centralus: :0
I0126 03:34:10.348567       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: centralus: :0
I0126 03:34:15.349557       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: centralus: :0
I0126 03:34:20.350305       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: centralus: :0
I0126 03:34:23.949141       1 topologycache.go:179] Ignoring node k8s-master-19596913-0 because it has an excluded label
I0126 03:34:23.949159       1 topologycache.go:183] Ignoring node 1959k8s000 because it is not ready: [{MemoryPressure False 2023-01-26 03:34:23 +0000 UTC 2023-01-26 03:34:23 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2023-01-26 03:34:23 +0000 UTC 2023-01-26 03:34:23 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2023-01-26 03:34:23 +0000 UTC 2023-01-26 03:34:23 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2023-01-26 03:34:23 +0000 UTC 2023-01-26 03:34:23 +0000 UTC KubeletNotReady CSINode is not yet initialized}]
W0126 03:34:23.949198       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="1959k8s000" does not exist
I0126 03:34:23.949206       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0126 03:34:23.958030       1 ttl_controller.go:276] "Changed ttl annotation" node="1959k8s000" new_ttl="0s"
I0126 03:34:25.350618       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: centralus: :0
I0126 03:34:25.350642       1 node_lifecycle_controller.go:771] Controller observed a new Node: "1959k8s000"
I0126 03:34:25.350672       1 controller_utils.go:168] "Recording event message for node" event="Registered Node 1959k8s000 in Controller" node="1959k8s000"
I0126 03:34:25.350699       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: centralus: :1
W0126 03:34:25.350937       1 node_lifecycle_controller.go:1014] Missing timestamp for Node 1959k8s000. Assuming now as a timestamp.
I0126 03:34:25.351026       1 node_lifecycle_controller.go:870] Node 1959k8s000 is NotReady as of 2023-01-26 03:34:25.351017003 +0000 UTC m=+115.576028112. Adding it to the Taint queue.
I0126 03:34:25.351109       1 node_lifecycle_controller.go:1165] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0126 03:34:25.351189       1 event.go:294] "Event occurred" object="1959k8s000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node 1959k8s000 event: Registered Node 1959k8s000 in Controller"
W0126 03:34:25.421470       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="1959k8s001" does not exist
I0126 03:34:25.421803       1 topologycache.go:179] Ignoring node k8s-master-19596913-0 because it has an excluded label
I0126 03:34:25.421873       1 topologycache.go:183] Ignoring node 1959k8s000 because it is not ready: [{MemoryPressure False 2023-01-26 03:34:23 +0000 UTC 2023-01-26 03:34:23 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2023-01-26 03:34:23 +0000 UTC 2023-01-26 03:34:23 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2023-01-26 03:34:23 +0000 UTC 2023-01-26 03:34:23 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2023-01-26 03:34:23 +0000 UTC 2023-01-26 03:34:23 +0000 UTC KubeletNotReady CSINode is not yet initialized}]
I0126 03:34:25.421963       1 topologycache.go:183] Ignoring node 1959k8s001 because it is not ready: [{MemoryPressure False 2023-01-26 03:34:25 +0000 UTC 2023-01-26 03:34:25 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2023-01-26 03:34:25 +0000 UTC 2023-01-26 03:34:25 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2023-01-26 03:34:25 +0000 UTC 2023-01-26 03:34:25 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2023-01-26 03:34:25 +0000 UTC 2023-01-26 03:34:25 +0000 UTC KubeletNotReady CSINode is not yet initialized}]
I0126 03:34:25.422005       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0126 03:34:25.429386       1 ttl_controller.go:276] "Changed ttl annotation" node="1959k8s001" new_ttl="0s"
I0126 03:34:30.352013       1 node_lifecycle_controller.go:771] Controller observed a new Node: "1959k8s001"
... skipping 24 lines ...
2023/01/26 03:48:05 Check successfully
2023/01/26 03:48:05 create example deployments
begin to create deployment examples ...
storageclass.storage.k8s.io/azurefile-csi created
Applying config "deploy/example/windows/deployment.yaml"
Waiting for deployment "deployment-azurefile-win" rollout to finish: 0 of 1 updated replicas are available...
error: timed out waiting for the condition
Failed to apply config "deploy/example/windows/deployment.yaml"
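Note: this rollout timeout looks like a downstream symptom of the earlier build failure: the e2e-3f5be661c8e98cc9695ad163f8a924094336f8cd image was never pushed, so the Windows example pods most likely stalled pulling it. A hypothetical manual triage, had the cluster still been reachable (the pod name placeholder and timeout value are illustrative):

    kubectl rollout status deployment/deployment-azurefile-win --timeout=5m
    kubectl get pods -o wide
    kubectl describe pod <deployment-azurefile-win-pod>    # expect ImagePullBackOff or ErrImagePull events
    kubectl get events --sort-by=.lastTimestamp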
------------------------------
[AfterSuite] [FAILED] [302.645 seconds]
[AfterSuite]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:148

  Unexpected error:
      <*exec.ExitError | 0xc000d28000>: {
          ProcessState: {
              pid: 52852,
              status: 256,
              rusage: {
                  Utime: {Sec: 0, Usec: 677757},
... skipping 20 lines ...
  occurred
  In [AfterSuite] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:261
------------------------------

Summarizing 2 Failures:
[FAIL] [BeforeSuite]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:261
[FAIL] [AfterSuite]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:261

Ran 0 of 38 Specs in 918.825 seconds
FAIL! -- A BeforeSuite node failed so all tests were skipped.

You're using deprecated Ginkgo functionality:
=============================================
Support for custom reporters has been removed in V2. Please read the documentation linked to below for Ginkgo's new behavior and for a migration path:
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#removed-custom-reporters

To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=2.4.0

--- FAIL: TestE2E (918.83s)
FAIL
FAIL sigs.k8s.io/azurefile-csi-driver/test/e2e 918.961s
FAIL
make: *** [Makefile:85: e2e-test] Error 1
2023/01/26 03:53:07 process.go:155: Step 'make e2e-test' finished in 18m17.133416855s
2023/01/26 03:53:07 aksengine_helpers.go:425: downloading /root/tmp3550779046/log-dump.sh from https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump.sh
2023/01/26 03:53:07 util.go:70: curl https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump.sh
2023/01/26 03:53:07 process.go:153: Running: chmod +x /root/tmp3550779046/log-dump.sh
2023/01/26 03:53:07 process.go:155: Step 'chmod +x /root/tmp3550779046/log-dump.sh' finished in 3.897966ms
2023/01/26 03:53:07 aksengine_helpers.go:425: downloading /root/tmp3550779046/log-dump-daemonset.yaml from https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump-daemonset.yaml
... skipping 33 lines ...
ssh key file /root/.ssh/id_rsa does not exist. Exiting.
2023/01/26 03:53:18 process.go:155: Step 'bash -c /root/tmp3550779046/win-ci-logs-collector.sh kubetest-rpegq46q.centralus.cloudapp.azure.com /root/tmp3550779046 /root/.ssh/id_rsa' finished in 4.034601ms
2023/01/26 03:53:18 aksengine.go:1141: Deleting resource group: kubetest-rpegq46q.
2023/01/26 03:58:36 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2023/01/26 03:58:36 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
2023/01/26 03:58:36 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 291.576748ms
2023/01/26 03:58:36 main.go:328: Something went wrong: encountered 1 errors: [error during make e2e-test: exit status 2]
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
8549ce7d5b40
... skipping 4 lines ...
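Note: the Ginkgo deprecation block in the summary above is unrelated to the failure; as it says, the warning can be silenced on a local re-run of the same Makefile target this job invoked, e.g.:

    ACK_GINKGO_DEPRECATIONS=2.4.0 make e2e-test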