PR | ZeroMagic: [DO NOT MERGE] remove csi proxy
Result | FAILURE
Tests | 1 failed / 13 succeeded
Started |
Elapsed | 17m59s
Revision | 8ae7ba9483575103ffce9ac17afd1444319b64e8
Refs | 1097
job-version | v1.27.0-alpha.1.88+7b243cef1a81f4 |
kubetest-version | v20230117-50d6df3625 |
revision | v1.27.0-alpha.1.88+7b243cef1a81f4 |
error during make e2e-test: exit status 2
from junit_runner.xml
kubetest Check APIReachability
kubetest Deferred TearDown
kubetest DumpClusterLogs
kubetest GetDeployer
kubetest IsUp
kubetest Prepare
kubetest TearDown
kubetest TearDown Previous
kubetest Timeout
kubetest Up
kubetest kubectl version
kubetest list nodes
kubetest test setup
... skipping 105 lines ...
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 11345  100 11345    0     0   191k      0 --:--:-- --:--:-- --:--:--  191k
Downloading https://get.helm.sh/helm-v3.11.0-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
docker pull k8sprow.azurecr.io/azurefile-csi:e2e-e9f0f8679bdb535612e60b9454ecbf361ce9c385 || make container-all push-manifest push-hostprocess
Error response from daemon: manifest for k8sprow.azurecr.io/azurefile-csi:e2e-e9f0f8679bdb535612e60b9454ecbf361ce9c385 not found: manifest unknown: manifest tagged by "e2e-e9f0f8679bdb535612e60b9454ecbf361ce9c385" is not found
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver'
CGO_ENABLED=0 GOOS=windows go build -a -ldflags "-X sigs.k8s.io/azurefile-csi-driver/pkg/azurefile.driverVersion=e2e-e9f0f8679bdb535612e60b9454ecbf361ce9c385 -X sigs.k8s.io/azurefile-csi-driver/pkg/azurefile.gitCommit=e9f0f8679bdb535612e60b9454ecbf361ce9c385 -X sigs.k8s.io/azurefile-csi-driver/pkg/azurefile.buildDate=2023-01-30T08:08:31Z -s -w -extldflags '-static'" -mod vendor -o _output/amd64/azurefileplugin.exe ./pkg/azurefileplugin
# sigs.k8s.io/azurefile-csi-driver/pkg/mounter
pkg/mounter/safe_mounter_host_process_windows.go:36:25: cannot use &winNativeCallMounter{} (value of type *winNativeCallMounter) as type CSIProxyMounter in variable declaration: *winNativeCallMounter does not implement CSIProxyMounter (missing CanSafelySkipMountPointCheck method)
pkg/mounter/safe_mounter_windows.go:298:15: cannot use NewWinNativeCallMounter() (value of type *winNativeCallMounter) as type mount.Interface in struct literal: *winNativeCallMounter does not implement mount.Interface (missing CanSafelySkipMountPointCheck method)
make[2]: *** [Makefile:140: azurefile-windows] Error 2
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver'
make[1]: *** [Makefile:94: e2e-bootstrap] Error 2
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver'
------------------------------
[BeforeSuite] [FAILED] [72.676 seconds]
[BeforeSuite]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:75
Begin Captured GinkgoWriter Output >>
Jan 30 08:10:16.942: INFO: The --provider flag is not set. Continuing as if --provider=skeleton had been used.
<< End Captured GinkgoWriter Output
Unexpected error:
    <*exec.ExitError | 0xc000412680>: {
        ProcessState: {
            pid: 15455,
            status: 512,
            rusage: {
                Utime: {Sec: 424, Usec: 456652},
... skipping 196 lines ...
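The root cause of the e2e-bootstrap failure is the Go interface-satisfaction error above: once csi-proxy is removed, winNativeCallMounter has to satisfy CSIProxyMounter and the vendored mount.Interface from k8s.io/mount-utils, and both now expect a CanSafelySkipMountPointCheck method that the type does not provide, so the Windows azurefileplugin.exe build (and therefore the image push and the later deployment rollout) fails. Below is a minimal, self-contained sketch of the pattern, not the driver's actual code: the reduced CSIProxyMounter interface, the Mount signature, and the bool return type of the stub are illustrative assumptions.

```go
package main

import "fmt"

// Reduced stand-in for the driver's CSIProxyMounter interface; the real one
// embeds mount.Interface from k8s.io/mount-utils and has many more methods.
type CSIProxyMounter interface {
	Mount(source, target, fstype string, options []string) error
	CanSafelySkipMountPointCheck() bool
}

// winNativeCallMounter mirrors the type named in the compile error.
type winNativeCallMounter struct{}

// Mount is a placeholder; the real mounter calls Windows APIs directly.
func (m *winNativeCallMounter) Mount(source, target, fstype string, options []string) error {
	return nil
}

// CanSafelySkipMountPointCheck is the method the compiler reports as missing.
// Returning false is a conservative hypothetical stub: callers keep checking
// mount points before operating on them.
func (m *winNativeCallMounter) CanSafelySkipMountPointCheck() bool {
	return false
}

// Compile-time assertion: without the method above, this declaration
// reproduces the "does not implement" error seen in the build log.
var _ CSIProxyMounter = &winNativeCallMounter{}

func main() {
	var m CSIProxyMounter = &winNativeCallMounter{}
	fmt.Println("can skip mount point check:", m.CanSafelySkipMountPointCheck())
}
```

Whether the real fix is a stub like this or forwarding to an underlying mount helper depends on how the PR intends to replace csi-proxy; the sketch only shows why the interface check fails and why the build stops before any e2e spec runs.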
I0130 08:06:56.153580 1 controllermanager.go:182] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0130 08:06:56.154335 1 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1675066015\" [serving] validServingFor=[127.0.0.1,localhost,localhost] issuer=\"localhost-ca@1675066015\" (2023-01-30 07:06:55 +0000 UTC to 2024-01-30 07:06:55 +0000 UTC (now=2023-01-30 08:06:56.154297867 +0000 UTC))"
I0130 08:06:56.154570 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1675066016\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1675066015\" (2023-01-30 07:06:55 +0000 UTC to 2024-01-30 07:06:55 +0000 UTC (now=2023-01-30 08:06:56.15452677 +0000 UTC))"
I0130 08:06:56.154752 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0130 08:06:56.164865 1 secure_serving.go:210] Serving securely on [::]:10257
I0130 08:06:56.165353 1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
E0130 08:07:01.349173 1 leaderelection.go:334] error initially creating leader election record: namespaces "kube-system" not found
I0130 08:07:03.446232 1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0130 08:07:03.446439 1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="k8s-master-27456096-0_b5e8f710-ccd3-4278-936f-ffbf7b97bd4e became leader"
W0130 08:07:03.566821 1 plugins.go:132] WARNING: azure built-in cloud provider is now deprecated. The Azure provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes-sigs/cloud-provider-azure
I0130 08:07:03.574897 1 azure_auth.go:117] azure: using client_id+client_secret to retrieve access token
I0130 08:07:03.583158 1 azure.go:454] Azure cloudprovider using try backoff: retries=6, exponent=1.500000, duration=5, jitter=1.000000
I0130 08:07:03.603025 1 azure_interfaceclient.go:63] Azure InterfacesClient (read ops) using rate limit config: QPS=6, bucket=20
... skipping 23 lines ...
I0130 08:07:03.637573 1 azure_securitygroupclient.go:64] Azure SecurityGroupsClient (read ops) using rate limit config: QPS=6, bucket=20
I0130 08:07:03.637582 1 azure_securitygroupclient.go:67] Azure SecurityGroupsClient (write ops) using rate limit config: QPS=100, bucket=1000
I0130 08:07:03.637592 1 azure_publicipclient.go:64] Azure PublicIPAddressesClient (read ops) using rate limit config: QPS=6, bucket=20
I0130 08:07:03.637618 1 azure_publicipclient.go:67] Azure PublicIPAddressesClient (write ops) using rate limit config: QPS=100, bucket=1000
I0130 08:07:03.654707 1 azure.go:743] Setting up informers for Azure cloud provider
I0130 08:07:03.663041 1 shared_informer.go:255] Waiting for caches to sync for tokens
W0130 08:07:03.737046 1 azure_config.go:53] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0130 08:07:03.737079 1 controllermanager.go:564] Starting "endpointslicemirroring"
I0130 08:07:03.743961 1 controllermanager.go:593] Started "endpointslicemirroring"
I0130 08:07:03.743985 1 controllermanager.go:564] Starting "daemonset"
I0130 08:07:03.744103 1 endpointslicemirroring_controller.go:212] Starting EndpointSliceMirroring controller
I0130 08:07:03.744179 1 shared_informer.go:255] Waiting for caches to sync for endpoint_slice_mirroring
I0130 08:07:03.750191 1 controllermanager.go:593] Started "daemonset"
... skipping 169 lines ...
I0130 08:07:07.631591 1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0130 08:07:07.631730 1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0130 08:07:07.640986 1 controllermanager.go:593] Started "attachdetach"
I0130 08:07:07.641028 1 controllermanager.go:564] Starting "resourcequota"
I0130 08:07:07.641209 1 attach_detach_controller.go:328] Starting attach detach controller
I0130 08:07:07.641230 1 shared_informer.go:255] Waiting for caches to sync for attach detach
W0130 08:07:07.641381 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k8s-master-27456096-0" does not exist
W0130 08:07:07.947280 1 shared_informer.go:533] resyncPeriod 19h6m40.722801701s is smaller than resyncCheckPeriod 23h54m19.685549092s and the informer has already started. Changing it to 23h54m19.685549092s
I0130 08:07:07.947373 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for jobs.batch
I0130 08:07:07.947396 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for cronjobs.batch
I0130 08:07:07.947426 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for csistoragecapacities.storage.k8s.io
I0130 08:07:07.947451 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for endpoints
I0130 08:07:07.947476 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for statefulsets.apps
... skipping 102 lines ...
I0130 08:07:48.198848 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: northcentralus: :0
I0130 08:07:53.199534 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: northcentralus: :0
I0130 08:07:55.943968 1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-75bdb78f8b to 1"
I0130 08:07:55.945359 1 event.go:294] "Event occurred" object="kube-system/coredns-autoscaler" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-autoscaler-84bb8dc9d5 to 1"
I0130 08:07:55.945495 1 replica_set.go:577] "Too few replicas" replicaSet="kube-system/coredns-75bdb78f8b" need=1 creating=1
I0130 08:07:55.945870 1 replica_set.go:577] "Too few replicas" replicaSet="kube-system/coredns-autoscaler-84bb8dc9d5" need=1 creating=1
I0130 08:07:55.994321 1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns-autoscaler" err="Operation cannot be fulfilled on deployments.apps \"coredns-autoscaler\": the object has been modified; please apply your changes to the latest version and try again"
I0130 08:07:55.995316 1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0130 08:07:56.054022 1 event.go:294] "Event occurred" object="kube-system/coredns-75bdb78f8b" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-75bdb78f8b-5ct96"
I0130 08:07:56.064940 1 event.go:294] "Event occurred" object="kube-system/coredns-autoscaler-84bb8dc9d5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-autoscaler-84bb8dc9d5-dvj5p"
I0130 08:07:58.201680 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: northcentralus: :0
I0130 08:07:58.393471 1 event.go:294] "Event occurred" object="kube-system/azure-ip-masq-agent" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azure-ip-masq-agent-dxbz5"
I0130 08:07:58.607367 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pwrdt"
I0130 08:08:02.535744 1 replica_set.go:577] "Too few replicas" replicaSet="kube-system/metrics-server-66dd6687d9" need=1 creating=1
I0130 08:08:02.537031 1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-66dd6687d9 to 1"
I0130 08:08:02.561305 1 event.go:294] "Event occurred" object="kube-system/metrics-server-66dd6687d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-66dd6687d9-478lf"
I0130 08:08:02.611334 1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I0130 08:08:02.671919 1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I0130 08:08:03.203550 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: northcentralus: :0
I0130 08:08:08.204523 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: northcentralus: :0
E0130 08:08:08.491602 1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0130 08:08:08.879314 1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0130 08:08:13.204701 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: northcentralus: :0
I0130 08:08:15.031947 1 topologycache.go:179] Ignoring node k8s-master-27456096-0 because it has an excluded label
I0130 08:08:15.031975 1 topologycache.go:183] Ignoring node 2745k8s000 because it is not ready: [{MemoryPressure False 2023-01-30 08:08:15 +0000 UTC 2023-01-30 08:08:15 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2023-01-30 08:08:15 +0000 UTC 2023-01-30 08:08:15 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2023-01-30 08:08:15 +0000 UTC 2023-01-30 08:08:15 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2023-01-30 08:08:15 +0000 UTC 2023-01-30 08:08:15 +0000 UTC KubeletNotReady CSINode is not yet initialized}]
I0130 08:08:15.033158 1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
W0130 08:08:15.033302 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="2745k8s000" does not exist
I0130 08:08:15.050317 1 ttl_controller.go:276] "Changed ttl annotation" node="2745k8s000" new_ttl="0s"
I0130 08:08:18.205299 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: northcentralus: :0
I0130 08:08:18.205348 1 node_lifecycle_controller.go:771] Controller observed a new Node: "2745k8s000"
I0130 08:08:18.205389 1 controller_utils.go:168] "Recording event message for node" event="Registered Node 2745k8s000 in Controller" node="2745k8s000"
I0130 08:08:18.205408 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: northcentralus: :1
W0130 08:08:18.205544 1 node_lifecycle_controller.go:1014] Missing timestamp for Node 2745k8s000. Assuming now as a timestamp.
I0130 08:08:18.205564 1 node_lifecycle_controller.go:870] Node 2745k8s000 is NotReady as of 2023-01-30 08:08:18.205553121 +0000 UTC m=+87.750096359. Adding it to the Taint queue.
I0130 08:08:18.205585 1 node_lifecycle_controller.go:1165] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0130 08:08:18.205853 1 event.go:294] "Event occurred" object="2745k8s000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node 2745k8s000 event: Registered Node 2745k8s000 in Controller"
I0130 08:08:22.854314 1 topologycache.go:179] Ignoring node k8s-master-27456096-0 because it has an excluded label
I0130 08:08:22.854535 1 topologycache.go:183] Ignoring node 2745k8s000 because it is not ready: [{MemoryPressure False 2023-01-30 08:08:15 +0000 UTC 2023-01-30 08:08:15 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2023-01-30 08:08:15 +0000 UTC 2023-01-30 08:08:15 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2023-01-30 08:08:15 +0000 UTC 2023-01-30 08:08:15 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2023-01-30 08:08:15 +0000 UTC 2023-01-30 08:08:15 +0000 UTC KubeletNotReady CSINode is not yet initialized}]
I0130 08:08:22.854946 1 topologycache.go:183] Ignoring node 2745k8s001 because it is not ready: [{MemoryPressure False 2023-01-30 08:08:22 +0000 UTC 2023-01-30 08:08:22 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2023-01-30 08:08:22 +0000 UTC 2023-01-30 08:08:22 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2023-01-30 08:08:22 +0000 UTC 2023-01-30 08:08:22 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2023-01-30 08:08:22 +0000 UTC 2023-01-30 08:08:22 +0000 UTC KubeletNotReady CSINode is not yet initialized}]
I0130 08:08:22.855159 1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
W0130 08:08:22.855380 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="2745k8s001" does not exist
I0130 08:08:22.865740 1 ttl_controller.go:276] "Changed ttl annotation" node="2745k8s001" new_ttl="0s"
I0130 08:08:23.205659 1 node_lifecycle_controller.go:771] Controller observed a new Node: "2745k8s001"
I0130 08:08:23.205699 1 controller_utils.go:168] "Recording event message for node" event="Registered Node 2745k8s001 in Controller" node="2745k8s001"
I0130 08:08:23.205827 1 node_lifecycle_controller.go:870] Node 2745k8s000 is NotReady as of 2023-01-30 08:08:23.205816567 +0000 UTC m=+92.750359805. Adding it to the Taint queue.
W0130 08:08:23.205857 1 node_lifecycle_controller.go:1014] Missing timestamp for Node 2745k8s001. Assuming now as a timestamp.
I0130 08:08:23.205872 1 node_lifecycle_controller.go:870] Node 2745k8s001 is NotReady as of 2023-01-30 08:08:23.205863868 +0000 UTC m=+92.750407106. Adding it to the Taint queue.
... skipping 20 lines ...
2023/01/30 08:11:30 Check successfully
2023/01/30 08:11:30 create example deployments
begin to create deployment examples ...
storageclass.storage.k8s.io/azurefile-csi created
Applying config "deploy/example/windows/deployment.yaml"
Waiting for deployment "deployment-azurefile-win" rollout to finish: 0 of 1 updated replicas are available...
error: timed out waiting for the condition
Failed to apply config "deploy/example/windows/deployment.yaml"
------------------------------
[AfterSuite] [FAILED] [302.170 seconds]
[AfterSuite]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:148
Unexpected error:
    <*exec.ExitError | 0xc000412000>: {
        ProcessState: {
            pid: 22683,
            status: 256,
            rusage: {
                Utime: {Sec: 0, Usec: 605508},
... skipping 20 lines ...
occurred
In [AfterSuite] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:261
------------------------------

Summarizing 2 Failures:
[FAIL] [BeforeSuite] /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:261
[FAIL] [AfterSuite] /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:261

Ran 0 of 38 Specs in 374.846 seconds
FAIL! -- A BeforeSuite node failed so all tests were skipped.

You're using deprecated Ginkgo functionality:
=============================================
Support for custom reporters has been removed in V2. Please read the documentation linked to below for Ginkgo's new behavior and for a migration path:
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#removed-custom-reporters

To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=2.4.0

--- FAIL: TestE2E (374.85s)
FAIL
FAIL	sigs.k8s.io/azurefile-csi-driver/test/e2e	374.920s
FAIL
make: *** [Makefile:85: e2e-test] Error 1
2023/01/30 08:16:32 process.go:155: Step 'make e2e-test' finished in 8m0.716214233s
2023/01/30 08:16:32 aksengine_helpers.go:425: downloading /root/tmp199103665/log-dump.sh from https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump.sh
2023/01/30 08:16:32 util.go:70: curl https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump.sh
2023/01/30 08:16:32 process.go:153: Running: chmod +x /root/tmp199103665/log-dump.sh
2023/01/30 08:16:32 process.go:155: Step 'chmod +x /root/tmp199103665/log-dump.sh' finished in 1.553427ms
2023/01/30 08:16:32 aksengine_helpers.go:425: downloading /root/tmp199103665/log-dump-daemonset.yaml from https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump-daemonset.yaml
... skipping 33 lines ...
ssh key file /root/.ssh/id_rsa does not exist. Exiting.
2023/01/30 08:16:43 process.go:155: Step 'bash -c /root/tmp199103665/win-ci-logs-collector.sh kubetest-3zc8wgsa.northcentralus.cloudapp.azure.com /root/tmp199103665 /root/.ssh/id_rsa' finished in 5.220071ms
2023/01/30 08:16:43 aksengine.go:1141: Deleting resource group: kubetest-3zc8wgsa.
2023/01/30 08:21:47 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2023/01/30 08:21:47 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
2023/01/30 08:21:47 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 231.942385ms
2023/01/30 08:21:47 main.go:328: Something went wrong: encountered 1 errors: [error during make e2e-test: exit status 2]
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...