PR | ZeroMagic: [DO NOT MERGE] remove csi proxy
Result | FAILURE
Tests | 1 failed / 13 succeeded
Started |
Elapsed | 27m44s
Revision | 71b15210d09da171256ff7dfb0777e792a68a2e8
Refs | 1097
job-version | v1.27.0-alpha.1.142+f9a3fd2810ed4c |
kubetest-version | v20230117-50d6df3625 |
revision | v1.27.0-alpha.1.142+f9a3fd2810ed4c |
error during make e2e-test: exit status 2
from junit_runner.xml
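For a local re-run of the failing step, a minimal sketch (assumptions: Docker, Go, and kubectl access to a test cluster are already configured; the REGISTRY value is illustrative, the commit hash is the one this run built):

  # Sketch: reproduce the failing `make e2e-test` step outside CI.
  git clone https://github.com/kubernetes-sigs/azurefile-csi-driver.git
  cd azurefile-csi-driver
  git checkout 14dc96138aceeb3926b07b5daef5cf84505ae0d4   # commit built by this run
  export REGISTRY=example.azurecr.io                      # illustrative; CI used k8sprow.azurecr.io
  make e2e-test                                           # the target that exited with status 2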
kubetest Check APIReachability
kubetest Deferred TearDown
kubetest DumpClusterLogs
kubetest GetDeployer
kubetest IsUp
kubetest Prepare
kubetest TearDown
kubetest TearDown Previous
kubetest Timeout
kubetest Up
kubetest kubectl version
kubetest list nodes
kubetest test setup
... skipping 97 lines ...
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 11345  100 11345    0     0  65959      0 --:--:-- --:--:-- --:--:-- 65959
Downloading https://get.helm.sh/helm-v3.11.0-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
docker pull k8sprow.azurecr.io/azurefile-csi:e2e-14dc96138aceeb3926b07b5daef5cf84505ae0d4 || make container-all push-manifest push-hostprocess
Error response from daemon: manifest for k8sprow.azurecr.io/azurefile-csi:e2e-14dc96138aceeb3926b07b5daef5cf84505ae0d4 not found: manifest unknown: manifest tagged by "e2e-14dc96138aceeb3926b07b5daef5cf84505ae0d4" is not found
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver'
CGO_ENABLED=0 GOOS=windows go build -a -ldflags "-X sigs.k8s.io/azurefile-csi-driver/pkg/azurefile.driverVersion=e2e-14dc96138aceeb3926b07b5daef5cf84505ae0d4 -X sigs.k8s.io/azurefile-csi-driver/pkg/azurefile.gitCommit=14dc96138aceeb3926b07b5daef5cf84505ae0d4 -X sigs.k8s.io/azurefile-csi-driver/pkg/azurefile.buildDate=2023-01-31T07:42:37Z -s -w -extldflags '-static'" -mod vendor -o _output/amd64/azurefileplugin.exe ./pkg/azurefileplugin
docker buildx rm container-builder || true
ERROR: no builder "container-builder" found
docker buildx create --use --name=container-builder
container-builder
# enable qemu for arm64 build
# https://github.com/docker/buildx/issues/464#issuecomment-741507760
docker run --privileged --rm tonistiigi/binfmt --uninstall qemu-aarch64
Unable to find image 'tonistiigi/binfmt:latest' locally
... skipping 1585 lines ...
    }
  ]
}
docker push k8sprow.azurecr.io/azurefile-csi:e2e-14dc96138aceeb3926b07b5daef5cf84505ae0d4-windows-hp
The push refers to repository [k8sprow.azurecr.io/azurefile-csi]
An image does not exist locally with the tag: k8sprow.azurecr.io/azurefile-csi
make[2]: *** [Makefile:211: push-hostprocess] Error 1
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver'
make[1]: *** [Makefile:94: e2e-bootstrap] Error 2
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver'
------------------------------
[BeforeSuite] [FAILED] [602.544 seconds]
[BeforeSuite]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:75
Begin Captured GinkgoWriter Output >>
Jan 31 07:44:23.141: INFO: The --provider flag is not set. Continuing as if --provider=skeleton had been used.
<< End Captured GinkgoWriter Output
Unexpected error:
    <*exec.ExitError | 0xc0004fe000>: {
        ProcessState: {
            pid: 15356,
            status: 512,
            rusage: {
                Utime: {Sec: 1269, Usec: 704156},
... skipping 231 lines ...
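The bootstrap failure above comes from the job's pull-or-build fallback: the e2e-tagged manifest was missing from the registry, so the job rebuilt and pushed all images, but no Windows host-process image was ever built locally, leaving `docker push` with nothing tagged to push. A condensed sketch of that fallback (the `docker pull ... || make ...` chain is verbatim from the log; the IMAGE variable is introduced here only for readability):

  # Pull-or-build fallback as executed above; on a registry cache miss the
  # whole container build-and-push pipeline runs.
  IMAGE=k8sprow.azurecr.io/azurefile-csi:e2e-14dc96138aceeb3926b07b5daef5cf84505ae0d4
  docker pull "$IMAGE" || make container-all push-manifest push-hostprocess
  # push-hostprocess then effectively ran:
  #   docker push "$IMAGE-windows-hp"
  # which failed because no locally built image carried that tag.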
I0131 07:40:09.583654       1 azure_securitygroupclient.go:64] Azure SecurityGroupsClient (read ops) using rate limit config: QPS=6, bucket=20
I0131 07:40:09.583657       1 azure_securitygroupclient.go:67] Azure SecurityGroupsClient (write ops) using rate limit config: QPS=100, bucket=1000
I0131 07:40:09.583663       1 azure_publicipclient.go:64] Azure PublicIPAddressesClient (read ops) using rate limit config: QPS=6, bucket=20
I0131 07:40:09.583669       1 azure_publicipclient.go:67] Azure PublicIPAddressesClient (write ops) using rate limit config: QPS=100, bucket=1000
I0131 07:40:09.595938       1 azure.go:743] Setting up informers for Azure cloud provider
I0131 07:40:09.608720       1 shared_informer.go:255] Waiting for caches to sync for tokens
W0131 07:40:09.677093       1 azure_config.go:53] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0131 07:40:09.677255       1 controllermanager.go:564] Starting "bootstrapsigner"
I0131 07:40:09.688126       1 controllermanager.go:593] Started "bootstrapsigner"
I0131 07:40:09.688314       1 controllermanager.go:564] Starting "root-ca-cert-publisher"
I0131 07:40:09.688291       1 shared_informer.go:255] Waiting for caches to sync for bootstrap_signer
I0131 07:40:09.696707       1 controllermanager.go:593] Started "root-ca-cert-publisher"
I0131 07:40:09.696726       1 controllermanager.go:564] Starting "endpoint"
... skipping 43 lines ...
I0131 07:40:09.832696       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0131 07:40:09.832749       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0131 07:40:09.832856       1 controllermanager.go:593] Started "attachdetach"
I0131 07:40:09.832869       1 controllermanager.go:564] Starting "serviceaccount"
I0131 07:40:09.833053       1 attach_detach_controller.go:328] Starting attach detach controller
I0131 07:40:09.833066       1 shared_informer.go:255] Waiting for caches to sync for attach detach
W0131 07:40:09.833701       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k8s-master-10439361-0" does not exist
I0131 07:40:09.856627       1 controllermanager.go:593] Started "serviceaccount"
I0131 07:40:09.856713       1 controllermanager.go:564] Starting "replicaset"
I0131 07:40:09.856884       1 serviceaccounts_controller.go:117] Starting service account controller
I0131 07:40:09.856950       1 shared_informer.go:255] Waiting for caches to sync for service account
I0131 07:40:09.867888       1 controllermanager.go:593] Started "replicaset"
I0131 07:40:09.867908       1 controllermanager.go:564] Starting "disruption"
... skipping 14 lines ...
I0131 07:40:10.361485       1 shared_informer.go:262] Caches are synced for TTL
I0131 07:40:10.576856       1 ttl_controller.go:276] "Changed ttl annotation" node="k8s-master-10439361-0" new_ttl="0s"
I0131 07:40:10.610064       1 controllermanager.go:593] Started "endpointslice"
I0131 07:40:10.610259       1 controllermanager.go:564] Starting "job"
I0131 07:40:10.610378       1 endpointslice_controller.go:257] Starting endpoint slice controller
I0131 07:40:10.610456       1 shared_informer.go:255] Waiting for caches to sync for endpoint_slice
I0131 07:40:10.610686       1 topologycache.go:183] Ignoring node k8s-master-10439361-0 because it is not ready: [{MemoryPressure False 2023-01-31 07:40:09 +0000 UTC 2023-01-31 07:40:09 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2023-01-31 07:40:09 +0000 UTC 2023-01-31 07:40:09 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2023-01-31 07:40:09 +0000 UTC 2023-01-31 07:40:09 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2023-01-31 07:40:09 +0000 UTC 2023-01-31 07:40:09 +0000 UTC KubeletNotReady failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "k8s-master-10439361-0" not found}]
I0131 07:40:10.611185       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0131 07:40:10.770343       1 controllermanager.go:593] Started "job"
I0131 07:40:10.770521       1 controllermanager.go:564] Starting "csrcleaner"
I0131 07:40:10.770645       1 job_controller.go:184] Starting job controller
I0131 07:40:10.770656       1 shared_informer.go:255] Waiting for caches to sync for job
I0131 07:40:10.808752       1 controllermanager.go:593] Started "csrcleaner"
... skipping 198 lines ...
I0131 07:40:49.478476       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: westus2: :0
I0131 07:40:54.479532       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: westus2: :0
I0131 07:40:55.390547       1 replica_set.go:577] "Too few replicas" replicaSet="kube-system/coredns-75bdb78f8b" need=1 creating=1
I0131 07:40:55.390836       1 replica_set.go:577] "Too few replicas" replicaSet="kube-system/coredns-autoscaler-84bb8dc9d5" need=1 creating=1
I0131 07:40:55.391835       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-75bdb78f8b to 1"
I0131 07:40:55.391988       1 event.go:294] "Event occurred" object="kube-system/coredns-autoscaler" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-autoscaler-84bb8dc9d5 to 1"
I0131 07:40:55.411124       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns-autoscaler" err="Operation cannot be fulfilled on deployments.apps \"coredns-autoscaler\": the object has been modified; please apply your changes to the latest version and try again"
I0131 07:40:55.417574       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0131 07:40:55.456450       1 event.go:294] "Event occurred" object="kube-system/coredns-autoscaler-84bb8dc9d5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-autoscaler-84bb8dc9d5-czkw6"
I0131 07:40:55.457268       1 event.go:294] "Event occurred" object="kube-system/coredns-75bdb78f8b" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-75bdb78f8b-9jb4j"
I0131 07:40:56.407323       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8lslq"
I0131 07:40:56.446100       1 event.go:294] "Event occurred" object="kube-system/azure-ip-masq-agent" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azure-ip-masq-agent-mcpnh"
I0131 07:40:57.270455       1 replica_set.go:577] "Too few replicas" replicaSet="kube-system/metrics-server-66dd6687d9" need=1 creating=1
I0131 07:40:57.270946       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-66dd6687d9 to 1"
I0131 07:40:57.279995       1 event.go:294] "Event occurred" object="kube-system/metrics-server-66dd6687d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-66dd6687d9-mxssp"
I0131 07:40:57.310815       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I0131 07:40:57.413940       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I0131 07:40:59.480748       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: westus2: :0
I0131 07:41:04.480984       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: westus2: :0
I0131 07:41:09.481641       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: westus2: :0
I0131 07:41:14.481948       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: westus2: :0
E0131 07:41:14.638003       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0131 07:41:15.163213       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0131 07:41:19.482416       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: westus2: :0
I0131 07:41:24.483400       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: westus2: :0
W0131 07:41:25.396420       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="1043k8s000" does not exist
I0131 07:41:25.397038       1 topologycache.go:179] Ignoring node k8s-master-10439361-0 because it has an excluded label
I0131 07:41:25.397048       1 topologycache.go:183] Ignoring node 1043k8s000 because it is not ready: [{MemoryPressure False 2023-01-31 07:41:25 +0000 UTC 2023-01-31 07:41:25 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2023-01-31 07:41:25 +0000 UTC 2023-01-31 07:41:25 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2023-01-31 07:41:25 +0000 UTC 2023-01-31 07:41:25 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2023-01-31 07:41:25 +0000 UTC 2023-01-31 07:41:25 +0000 UTC KubeletNotReady CSINode is not yet initialized}]
I0131 07:41:25.397085       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0131 07:41:25.414417       1 ttl_controller.go:276] "Changed ttl annotation" node="1043k8s000" new_ttl="0s"
I0131 07:41:29.484568       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: westus2: :0
I0131 07:41:29.484595       1 node_lifecycle_controller.go:771] Controller observed a new Node: "1043k8s000"
... skipping 13 lines ...
I0131 07:41:39.488295       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: westus2: :0
I0131 07:41:39.488402       1 node_lifecycle_controller.go:894] Node 1043k8s000 is healthy again, removing all taints
I0131 07:41:39.488434       1 node_lifecycle_controller.go:1192] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0131 07:41:44.489136       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: westus2: :0
I0131 07:41:44.489244       1 node_lifecycle_controller.go:1215] Controller detected that zone westus2: :0 is now in state .
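Both agent nodes pass through a KubeletNotReady phase with reason "CSINode is not yet initialized", the condition most relevant to a PR that removes csi-proxy. A quick inspection sketch for a live repro (node names are the ones from this run's logs):

  # Check node readiness and CSI driver registration directly.
  kubectl get nodes -o wide
  kubectl describe node 1043k8s000         # full condition list, including Ready
  kubectl get csinode 1043k8s000 -o yaml   # drivers kubelet has registered, if any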
E0131 07:41:44.658731       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0131 07:41:45.182107       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0131 07:41:54.494018       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: westus2: :0
I0131 07:41:59.494925       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: westus2: :0
W0131 07:42:04.267034       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="1043k8s001" does not exist
I0131 07:42:04.267323       1 topologycache.go:179] Ignoring node k8s-master-10439361-0 because it has an excluded label
I0131 07:42:04.267335       1 topologycache.go:183] Ignoring node 1043k8s001 because it is not ready: [{MemoryPressure False 2023-01-31 07:42:04 +0000 UTC 2023-01-31 07:42:04 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2023-01-31 07:42:04 +0000 UTC 2023-01-31 07:42:04 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2023-01-31 07:42:04 +0000 UTC 2023-01-31 07:42:04 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2023-01-31 07:42:04 +0000 UTC 2023-01-31 07:42:04 +0000 UTC KubeletNotReady CSINode is not yet initialized}]
I0131 07:42:04.267461       1 topologycache.go:215] Insufficient node info for topology hints (1 zones, %!s(int64=4000) CPU, true)
I0131 07:42:04.281748       1 ttl_controller.go:276] "Changed ttl annotation" node="1043k8s001" new_ttl="0s"
I0131 07:42:04.495870       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: westus2: :0
I0131 07:42:04.495893       1 node_lifecycle_controller.go:771] Controller observed a new Node: "1043k8s001"
... skipping 15 lines ...
2023/01/31 07:54:28 Check successfully
2023/01/31 07:54:28 create example deployments
begin to create deployment examples ...
storageclass.storage.k8s.io/azurefile-csi created
Applying config "deploy/example/windows/deployment.yaml"
Waiting for deployment "deployment-azurefile-win" rollout to finish: 0 of 1 updated replicas are available...
error: timed out waiting for the condition
Failed to apply config "deploy/example/windows/deployment.yaml"
------------------------------
[AfterSuite] [FAILED] [304.042 seconds]
[AfterSuite]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:148
Unexpected error:
    <*exec.ExitError | 0xc0004fe000>: {
        ProcessState: {
            pid: 52829,
            status: 256,
            rusage: {
                Utime: {Sec: 0, Usec: 642382},
... skipping 20 lines ...
occurred
In [AfterSuite] at: /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:261
------------------------------

Summarizing 2 Failures:
[FAIL] [BeforeSuite]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:261
[FAIL] [AfterSuite]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:261

Ran 0 of 38 Specs in 906.586 seconds
FAIL! -- A BeforeSuite node failed so all tests were skipped.

You're using deprecated Ginkgo functionality:
=============================================
Support for custom reporters has been removed in V2. Please read the documentation linked to below for Ginkgo's new behavior and for a migration path:
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#removed-custom-reporters

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=2.4.0

--- FAIL: TestE2E (906.59s)
FAIL
FAIL	sigs.k8s.io/azurefile-csi-driver/test/e2e	906.652s
FAIL
make: *** [Makefile:85: e2e-test] Error 1
2023/01/31 07:59:30 process.go:155: Step 'make e2e-test' finished in 16m52.555829564s
2023/01/31 07:59:30 aksengine_helpers.go:425: downloading /root/tmp1759528722/log-dump.sh from https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump.sh
2023/01/31 07:59:30 util.go:70: curl https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump.sh
2023/01/31 07:59:30 process.go:153: Running: chmod +x /root/tmp1759528722/log-dump.sh
2023/01/31 07:59:30 process.go:155: Step 'chmod +x /root/tmp1759528722/log-dump.sh' finished in 3.313479ms
2023/01/31 07:59:30 aksengine_helpers.go:425: downloading /root/tmp1759528722/log-dump-daemonset.yaml from https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/hack/log-dump/log-dump-daemonset.yaml
... skipping 33 lines ...
ssh key file /root/.ssh/id_rsa does not exist. Exiting.
2023/01/31 07:59:46 process.go:155: Step 'bash -c /root/tmp1759528722/win-ci-logs-collector.sh kubetest-vpbv4jo5.westus2.cloudapp.azure.com /root/tmp1759528722 /root/.ssh/id_rsa' finished in 4.915441ms
2023/01/31 07:59:46 aksengine.go:1141: Deleting resource group: kubetest-vpbv4jo5.
2023/01/31 08:05:03 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2023/01/31 08:05:03 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
2023/01/31 08:05:03 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 257.133268ms
2023/01/31 08:05:03 main.go:328: Something went wrong: encountered 1 errors: [error during make e2e-test: exit status 2]
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
d45c259ec9f2
... skipping 4 lines ...
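For reference, the AfterSuite failure traces back to the Windows example deployment never becoming available. That step can be replayed by hand against a live cluster using the manifest named in the log (a sketch; the explicit timeout and the grep filter are illustrative additions):

  # Replay the example-deployment step that timed out above.
  kubectl apply -f deploy/example/windows/deployment.yaml   # path from the log
  kubectl rollout status deployment/deployment-azurefile-win --timeout=5m
  # On timeout, the pending pod's events usually name the blocker:
  kubectl get pods -o wide | grep deployment-azurefile-win
  kubectl describe pod <pod-name>   # substitute the name from the previous command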