PR andyzhangx: test: support Win2022 test on capz
Result FAILURE
Tests 1 failed / 20 succeeded
Started 2022-04-13 12:43
Elapsed 1h27m
Revision d7600c07bbfa811ac4d88920710966de02038df5
Refs 1280

Test Failures


AzureDisk CSI Driver End-to-End Tests Dynamic Provisioning [single-az] should create a volume on demand with mount options [kubernetes.io/azure-disk] [disk.csi.azure.com] [Windows] 17m8s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=AzureDisk\sCSI\sDriver\sEnd\-to\-End\sTests\sDynamic\sProvisioning\s\[single\-az\]\sshould\screate\sa\svolume\son\sdemand\swith\smount\soptions\s\[kubernetes\.io\/azure\-disk\]\s\[disk\.csi\.azure\.com\]\s\[Windows\]$'
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:101
Unexpected error:
    <*errors.errorString | 0xc000496fc0>: {
        s: "pod \"azuredisk-volume-tester-px6qw\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-13 13:01:06 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-13 13:01:06 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-13 13:01:06 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-13 13:01:06 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.1.0.7 PodIP:192.168.87.3 PodIPs:[{IP:192.168.87.3}] StartTime:2022-04-13 13:01:06 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:&ContainerStateWaiting{Reason:RunContainerError,Message:context deadline exceeded,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:,Message:,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:containerd://ef6dded2508af20fd65db07f5e96dedabb9ba9b2b018df4fc9df070e2ac7dbab,}} Ready:false RestartCount:0 Image:mcr.microsoft.com/windows/servercore:ltsc2022 ImageID:mcr.microsoft.com/windows/servercore@sha256:378d85bd8e7388a777d25009f8523d1e95a5f544f131eaada83171045d5b2c1e ContainerID:containerd://ef6dded2508af20fd65db07f5e96dedabb9ba9b2b018df4fc9df070e2ac7dbab Started:0xc0009ff078}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
    }
    pod "azuredisk-volume-tester-px6qw" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-13 13:01:06 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-13 13:01:06 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-13 13:01:06 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-13 13:01:06 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.1.0.7 PodIP:192.168.87.3 PodIPs:[{IP:192.168.87.3}] StartTime:2022-04-13 13:01:06 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:&ContainerStateWaiting{Reason:RunContainerError,Message:context deadline exceeded,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:,Message:,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:containerd://ef6dded2508af20fd65db07f5e96dedabb9ba9b2b018df4fc9df070e2ac7dbab,}} Ready:false RestartCount:0 Image:mcr.microsoft.com/windows/servercore:ltsc2022 ImageID:mcr.microsoft.com/windows/servercore@sha256:378d85bd8e7388a777d25009f8523d1e95a5f544f131eaada83171045d5b2c1e ContainerID:containerd://ef6dded2508af20fd65db07f5e96dedabb9ba9b2b018df4fc9df070e2ac7dbab Started:0xc0009ff078}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
occurred
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:807
Click to see stdout/stderr from junit_01.xml



20 Passed Tests

38 Skipped Tests

Error lines from build-log.txt

... skipping 582 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
# Deploy CAPI
curl --retry 3 -sSL https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.1.2/cluster-api-components.yaml | /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/envsubst-v2.0.0-20210730161058-179042472c46 | kubectl apply -f -
namespace/capi-system created
customresourcedefinition.apiextensions.k8s.io/clusterclasses.cluster.x-k8s.io created
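The deploy step above pipes the release manifest through envsubst before `kubectl apply`, substituting `${VAR}` references from the environment. A minimal, self-contained sketch of the same templating pattern (using `sed` as a stand-in in case envsubst is not installed; the variable and manifest line are illustrative, not from this job):

```shell
# Substitute ${CLUSTER_NAME} into a one-line manifest fragment,
# mimicking what envsubst does in the pipeline above.
export CLUSTER_NAME=demo
printf 'metadata:\n  name: ${CLUSTER_NAME}\n' \
  | sed "s/\${CLUSTER_NAME}/${CLUSTER_NAME}/"
```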
... skipping 129 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! kubectl get secrets | grep capz-ltahza-kubeconfig; do sleep 1; done"
capz-ltahza-kubeconfig                 cluster.x-k8s.io/secret               1      0s
# Get kubeconfig and store it locally.
kubectl get secrets capz-ltahza-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
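The extraction step above pulls a base64-encoded field out of the secret's JSON with `jq -r` and decodes it. The same pattern, reduced to a self-contained sketch (the inline JSON is a stand-in for the real capz kubeconfig secret):

```shell
# jq -r emits the raw string value without JSON quoting, so it can be
# piped straight into base64 --decode. "aGVsbG8K" decodes to "hello".
json='{"data":{"value":"aGVsbG8K"}}'
printf '%s' "$json" | jq -r .data.value | base64 --decode
```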
timeout --foreground 600 bash -c "while ! kubectl --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-ltahza-control-plane-j7tzj   NotReady   <none>   5s    v1.23.5
run "kubectl --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
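The two waits above share one pattern: `timeout --foreground N bash -c 'while ! <probe>; do sleep 1; done'`, which polls until the probe succeeds or the deadline kills the loop (exit 124). A self-contained sketch with a file probe standing in for the kubectl checks (names are illustrative):

```shell
# Background job makes the "resource" appear after ~1s; the foreground
# loop polls for it once per second, bounded by a 10s deadline.
tmpdir="$(mktemp -d)"
( sleep 1; touch "$tmpdir/capz-demo-kubeconfig" ) &
timeout --foreground 10 bash -c \
  "while ! ls '$tmpdir' | grep -q kubeconfig; do sleep 1; done"
echo "wait exit=$?"
rm -rf "$tmpdir"
```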
Waiting for 1 control plane machine(s), 2 worker machine(s), and 2 windows machine(s) to become Ready
node/capz-ltah-58kzv condition met
node/capz-ltah-dxbjr condition met
... skipping 923 lines ...
          image: "mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.4.0"
          args:
            - "-csi-address=$(ADDRESS)"
            - "-v=2"
            - "-leader-election"
            - "--leader-election-namespace=kube-system"
            - '-handle-volume-inuse-error=false'
            - '-feature-gates=RecoverVolumeExpansionFailure=true'
            - "-timeout=240s"
          env:
            - name: ADDRESS
              value: /csi/csi.sock
          volumeMounts:
... skipping 122 lines ...
Git Commit: N/A
Go Version: go1.18
Platform: linux/amd64
Topology Key: topology.disk.csi.azure.com/zone

Streaming logs below:
W0413 13:00:59.748766   36400 azure_disk_utils.go:165] InitializeCloudFromSecret: failed to get cloud config from secret /: failed to get secret /: resource name may not be empty
STEP: Building a namespace api object, basename azuredisk
W0413 13:00:59.750350   36400 azuredisk.go:185] DisableAvailabilitySetNodes for controller is set as false while current VMType is vmss
I0413 13:00:59.753467   36400 driver.go:81] Enabling controller service capability: CREATE_DELETE_VOLUME
I0413 13:00:59.753508   36400 driver.go:81] Enabling controller service capability: PUBLISH_UNPUBLISH_VOLUME
I0413 13:00:59.753517   36400 driver.go:81] Enabling controller service capability: CREATE_DELETE_SNAPSHOT
I0413 13:00:59.753523   36400 driver.go:81] Enabling controller service capability: CLONE_VOLUME
... skipping 24 lines ...
Apr 13 13:01:05.988: INFO: PersistentVolumeClaim pvc-wb4pt found and phase=Bound (4.307906824s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Apr 13 13:01:06.299: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-px6qw" in namespace "azuredisk-8081" to be "Succeeded or Failed"
Apr 13 13:01:06.400: INFO: Pod "azuredisk-volume-tester-px6qw": Phase="Pending", Reason="", readiness=false. Elapsed: 101.216069ms
Apr 13 13:01:08.503: INFO: Pod "azuredisk-volume-tester-px6qw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203429043s
Apr 13 13:01:10.607: INFO: Pod "azuredisk-volume-tester-px6qw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.307661467s
Apr 13 13:01:12.710: INFO: Pod "azuredisk-volume-tester-px6qw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.411054681s
Apr 13 13:01:14.814: INFO: Pod "azuredisk-volume-tester-px6qw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.514873127s
Apr 13 13:01:16.917: INFO: Pod "azuredisk-volume-tester-px6qw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.617926655s
... skipping 223 lines ...
Apr 13 13:09:09.709: INFO: Pod "azuredisk-volume-tester-px6qw": Phase="Pending", Reason="", readiness=false. Elapsed: 8m3.409971933s
Apr 13 13:09:11.819: INFO: Pod "azuredisk-volume-tester-px6qw": Phase="Pending", Reason="", readiness=false. Elapsed: 8m5.519791808s
Apr 13 13:09:13.929: INFO: Pod "azuredisk-volume-tester-px6qw": Phase="Pending", Reason="", readiness=false. Elapsed: 8m7.630379525s
Apr 13 13:09:16.040: INFO: Pod "azuredisk-volume-tester-px6qw": Phase="Pending", Reason="", readiness=false. Elapsed: 8m9.741286625s
Apr 13 13:09:18.150: INFO: Pod "azuredisk-volume-tester-px6qw": Phase="Pending", Reason="", readiness=false. Elapsed: 8m11.850514382s
Apr 13 13:09:20.260: INFO: Pod "azuredisk-volume-tester-px6qw": Phase="Pending", Reason="", readiness=false. Elapsed: 8m13.96074088s
Apr 13 13:09:22.371: INFO: Pod "azuredisk-volume-tester-px6qw": Phase="Failed", Reason="", readiness=false. Elapsed: 8m16.071689835s
Apr 13 13:09:22.371: INFO: deleting Pod "azuredisk-8081"/"azuredisk-volume-tester-px6qw"
Apr 13 13:09:22.491: INFO: Pod azuredisk-volume-tester-px6qw has the following logs: 
STEP: Deleting pod azuredisk-volume-tester-px6qw in namespace azuredisk-8081
Apr 13 13:09:22.608: INFO: deleting PVC "azuredisk-8081"/"pvc-wb4pt"
Apr 13 13:09:22.608: INFO: Deleting PersistentVolumeClaim "pvc-wb4pt"
STEP: waiting for claim's PV "pvc-a26e8f44-5d8e-4862-8c02-b85fcebe8848" to be deleted
... skipping 111 lines ...
Apr 13 13:18:03.800: INFO: At 2022-04-13 13:01:04 +0000 UTC - event for pvc-wb4pt: {disk.csi.azure.com_capz-ltahza-md-0-x8njp_4a678e1f-3cc3-4540-aa42-e5cf29b3acb7 } ProvisioningSucceeded: Successfully provisioned volume pvc-a26e8f44-5d8e-4862-8c02-b85fcebe8848
Apr 13 13:18:03.800: INFO: At 2022-04-13 13:01:06 +0000 UTC - event for azuredisk-volume-tester-px6qw: {default-scheduler } Scheduled: Successfully assigned azuredisk-8081/azuredisk-volume-tester-px6qw to capz-ltah-dxbjr
Apr 13 13:18:03.800: INFO: At 2022-04-13 13:01:22 +0000 UTC - event for azuredisk-volume-tester-px6qw: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-a26e8f44-5d8e-4862-8c02-b85fcebe8848" 
Apr 13 13:18:03.800: INFO: At 2022-04-13 13:01:58 +0000 UTC - event for azuredisk-volume-tester-px6qw: {kubelet capz-ltah-dxbjr} Pulling: Pulling image "mcr.microsoft.com/windows/servercore:ltsc2022"
Apr 13 13:18:03.800: INFO: At 2022-04-13 13:07:21 +0000 UTC - event for azuredisk-volume-tester-px6qw: {kubelet capz-ltah-dxbjr} Pulled: Successfully pulled image "mcr.microsoft.com/windows/servercore:ltsc2022" in 5m22.2798351s
Apr 13 13:18:03.800: INFO: At 2022-04-13 13:07:21 +0000 UTC - event for azuredisk-volume-tester-px6qw: {kubelet capz-ltah-dxbjr} Created: Created container volume-tester
Apr 13 13:18:03.800: INFO: At 2022-04-13 13:09:21 +0000 UTC - event for azuredisk-volume-tester-px6qw: {kubelet capz-ltah-dxbjr} Failed: Error: context deadline exceeded
Apr 13 13:18:03.800: INFO: At 2022-04-13 13:09:21 +0000 UTC - event for azuredisk-volume-tester-px6qw: {kubelet capz-ltah-dxbjr} Pulled: Container image "mcr.microsoft.com/windows/servercore:ltsc2022" already present on machine
Apr 13 13:18:03.800: INFO: At 2022-04-13 13:09:25 +0000 UTC - event for azuredisk-volume-tester-px6qw: {kubelet capz-ltah-dxbjr} Started: Started container volume-tester
Apr 13 13:18:03.800: INFO: At 2022-04-13 13:09:26 +0000 UTC - event for azuredisk-volume-tester-px6qw: {kubelet capz-ltah-dxbjr} Killing: Stopping container volume-tester
Apr 13 13:18:03.800: INFO: At 2022-04-13 13:11:28 +0000 UTC - event for azuredisk-volume-tester-px6qw: {kubelet capz-ltah-dxbjr} FailedKillPod: error killing pod: failed to "KillPodSandbox" for "eaca93fc-d67e-4f82-876b-a63b263ffbee" with KillPodSandboxError: "rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Apr 13 13:18:03.902: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Apr 13 13:18:03.902: INFO: 
Apr 13 13:18:04.074: INFO: 
Logging node info for node capz-ltah-58kzv
Apr 13 13:18:04.182: INFO: Node Info: &Node{ObjectMeta:{capz-ltah-58kzv    57729383-baa5-45d0-8db6-220f208ba83b 4186 0 2022-04-13 12:56:38 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:northeurope failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-ltah-58kzv kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.20348 topology.disk.csi.azure.com/zone: topology.kubernetes.io/region:northeurope topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-ltahza cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/machine:capz-ltahza-md-win-d88584c4d-f2scz cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-ltahza-md-win-d88584c4d csi.volume.kubernetes.io/nodeid:{"disk.csi.azure.com":"capz-ltah-58kzv"} kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.6/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.131.1 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:84:d0:38 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{Go-http-client Update v1 2022-04-13 12:56:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubeadm.alpha.kubernetes.io/cri-socket":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } 
{kube-controller-manager Update v1 2022-04-13 12:56:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-04-13 12:57:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {calico-node.exe Update v1 2022-04-13 12:58:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {Go-http-client Update v1 2022-04-13 13:00:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.disk.csi.azure.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/virtualMachines/capz-ltah-58kzv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki 
BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-13 13:16:32 +0000 UTC,LastTransitionTime:2022-04-13 12:56:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-13 13:16:32 +0000 UTC,LastTransitionTime:2022-04-13 12:56:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-13 13:16:32 +0000 UTC,LastTransitionTime:2022-04-13 12:56:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-13 13:16:32 +0000 UTC,LastTransitionTime:2022-04-13 12:57:50 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-ltah-58kzv,},NodeAddress{Type:InternalIP,Address:10.1.0.6,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-ltah-58kzv,SystemUUID:6514C49B-D5BD-4D9E-8786-525D2ECF6A06,BootID:,KernelVersion:10.0.20348.405,OSImage:Windows Server 2022 Datacenter,ContainerRuntimeVersion:containerd://1.6.1,KubeletVersion:v1.23.5,KubeProxyVersion:v1.23.5,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:141afecd724eb757ee41593cc73f35fb19a2ce13cad73179cf3b1cef81a637d1 docker.io/sigwindowstools/calico-install:v3.22.1-hostprocess],SizeBytes:148096127,},ContainerImage{Names:[capzci.azurecr.io/azuredisk-csi@sha256:61a2e32fbb4b6722027417b9fe2ca6be53958707c8544393ba4f6fddb5239705 
capzci.azurecr.io/azuredisk-csi:v1.16.0-eb7e0710ddbc5984b82c0c8453e3a2427fa2a635],SizeBytes:141765990,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:b8e6e2f190e58d6b61b7f2a19b0e2e17a377e5b9d7b9528e9b7953d9f9ab59bb docker.io/sigwindowstools/calico-node:v3.22.1-hostprocess],SizeBytes:128793648,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar@sha256:dbec3a8166686b09b242176ab5b99e993da4126438bbce68147c3fd654f35662 mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.4.0],SizeBytes:125554391,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe@sha256:c96a6255c42766f6b8bb1a7cda02b0060ab1b20b2e2dafcc64ec09e7646745a6 mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.5.0],SizeBytes:124471446,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:123925937,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:119397646,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 13 13:18:04.183: INFO: 
... skipping 114 lines ...
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:40
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:43
    should create a volume on demand with mount options [kubernetes.io/azure-disk] [disk.csi.azure.com] [Windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:101

    Unexpected error:
        <*errors.errorString | 0xc000496fc0>: {
            s: "pod \"azuredisk-volume-tester-px6qw\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-13 13:01:06 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-13 13:01:06 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-13 13:01:06 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-13 13:01:06 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.1.0.7 PodIP:192.168.87.3 PodIPs:[{IP:192.168.87.3}] StartTime:2022-04-13 13:01:06 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:&ContainerStateWaiting{Reason:RunContainerError,Message:context deadline exceeded,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:,Message:,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:containerd://ef6dded2508af20fd65db07f5e96dedabb9ba9b2b018df4fc9df070e2ac7dbab,}} Ready:false RestartCount:0 Image:mcr.microsoft.com/windows/servercore:ltsc2022 ImageID:mcr.microsoft.com/windows/servercore@sha256:378d85bd8e7388a777d25009f8523d1e95a5f544f131eaada83171045d5b2c1e ContainerID:containerd://ef6dded2508af20fd65db07f5e96dedabb9ba9b2b018df4fc9df070e2ac7dbab Started:0xc0009ff078}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
        }
        pod "azuredisk-volume-tester-px6qw" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-13 13:01:06 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-13 13:01:06 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-13 13:01:06 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-tester]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-13 13:01:06 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.1.0.7 PodIP:192.168.87.3 PodIPs:[{IP:192.168.87.3}] StartTime:2022-04-13 13:01:06 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-tester State:{Waiting:&ContainerStateWaiting{Reason:RunContainerError,Message:context deadline exceeded,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:,Message:,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:containerd://ef6dded2508af20fd65db07f5e96dedabb9ba9b2b018df4fc9df070e2ac7dbab,}} Ready:false RestartCount:0 Image:mcr.microsoft.com/windows/servercore:ltsc2022 ImageID:mcr.microsoft.com/windows/servercore@sha256:378d85bd8e7388a777d25009f8523d1e95a5f544f131eaada83171045d5b2c1e ContainerID:containerd://ef6dded2508af20fd65db07f5e96dedabb9ba9b2b018df4fc9df070e2ac7dbab Started:0xc0009ff078}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
    occurred

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:807
------------------------------
Dynamic Provisioning [single-az] 
  should create a pod with volume mount subpath [disk.csi.azure.com] [Windows]
... skipping 18 lines ...
Apr 13 13:18:12.743: INFO: PersistentVolumeClaim pvc-8mvpk found but phase is Pending instead of Bound.
Apr 13 13:18:14.846: INFO: PersistentVolumeClaim pvc-8mvpk found and phase=Bound (4.307404099s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Apr 13 13:18:15.154: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-tf9vc" in namespace "azuredisk-2540" to be "Succeeded or Failed"
Apr 13 13:18:15.257: INFO: Pod "azuredisk-volume-tester-tf9vc": Phase="Pending", Reason="", readiness=false. Elapsed: 102.604669ms
Apr 13 13:18:17.360: INFO: Pod "azuredisk-volume-tester-tf9vc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205570001s
Apr 13 13:18:19.463: INFO: Pod "azuredisk-volume-tester-tf9vc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308763993s
Apr 13 13:18:21.566: INFO: Pod "azuredisk-volume-tester-tf9vc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.411683713s
Apr 13 13:18:23.670: INFO: Pod "azuredisk-volume-tester-tf9vc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.515771262s
Apr 13 13:18:25.773: INFO: Pod "azuredisk-volume-tester-tf9vc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.618894564s
... skipping 17 lines ...
Apr 13 13:19:03.650: INFO: Pod "azuredisk-volume-tester-tf9vc": Phase="Pending", Reason="", readiness=false. Elapsed: 48.4962385s
Apr 13 13:19:05.757: INFO: Pod "azuredisk-volume-tester-tf9vc": Phase="Running", Reason="", readiness=true. Elapsed: 50.602903403s
Apr 13 13:19:07.862: INFO: Pod "azuredisk-volume-tester-tf9vc": Phase="Running", Reason="", readiness=true. Elapsed: 52.708434968s
Apr 13 13:19:09.968: INFO: Pod "azuredisk-volume-tester-tf9vc": Phase="Running", Reason="", readiness=true. Elapsed: 54.81395929s
Apr 13 13:19:12.074: INFO: Pod "azuredisk-volume-tester-tf9vc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 56.920077534s
STEP: Saw pod success
Apr 13 13:19:12.074: INFO: Pod "azuredisk-volume-tester-tf9vc" satisfied condition "Succeeded or Failed"
Apr 13 13:19:12.074: INFO: deleting Pod "azuredisk-2540"/"azuredisk-volume-tester-tf9vc"
Apr 13 13:19:12.187: INFO: Pod azuredisk-volume-tester-tf9vc has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-tf9vc in namespace azuredisk-2540
Apr 13 13:19:12.306: INFO: deleting PVC "azuredisk-2540"/"pvc-8mvpk"
Apr 13 13:19:12.306: INFO: Deleting PersistentVolumeClaim "pvc-8mvpk"
... skipping 46 lines ...
Apr 13 13:19:59.666: INFO: PersistentVolumeClaim pvc-grsz6 found and phase=Bound (4.331304433s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Apr 13 13:19:59.978: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-99g2q" in namespace "azuredisk-4728" to be "Succeeded or Failed"
Apr 13 13:20:00.080: INFO: Pod "azuredisk-volume-tester-99g2q": Phase="Pending", Reason="", readiness=false. Elapsed: 101.869427ms
Apr 13 13:20:02.183: INFO: Pod "azuredisk-volume-tester-99g2q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205538289s
Apr 13 13:20:04.288: INFO: Pod "azuredisk-volume-tester-99g2q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30959544s
Apr 13 13:20:06.390: INFO: Pod "azuredisk-volume-tester-99g2q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.411798155s
Apr 13 13:20:08.493: INFO: Pod "azuredisk-volume-tester-99g2q": Phase="Pending", Reason="", readiness=false. Elapsed: 8.515445324s
Apr 13 13:20:10.597: INFO: Pod "azuredisk-volume-tester-99g2q": Phase="Pending", Reason="", readiness=false. Elapsed: 10.619260361s
... skipping 16 lines ...
Apr 13 13:20:46.369: INFO: Pod "azuredisk-volume-tester-99g2q": Phase="Pending", Reason="", readiness=false. Elapsed: 46.39086984s
Apr 13 13:20:48.475: INFO: Pod "azuredisk-volume-tester-99g2q": Phase="Pending", Reason="", readiness=false. Elapsed: 48.496937894s
Apr 13 13:20:50.651: INFO: Pod "azuredisk-volume-tester-99g2q": Phase="Running", Reason="", readiness=true. Elapsed: 50.67324605s
Apr 13 13:20:52.758: INFO: Pod "azuredisk-volume-tester-99g2q": Phase="Running", Reason="", readiness=true. Elapsed: 52.779851831s
Apr 13 13:20:54.863: INFO: Pod "azuredisk-volume-tester-99g2q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 54.884997121s
STEP: Saw pod success
Apr 13 13:20:54.863: INFO: Pod "azuredisk-volume-tester-99g2q" satisfied condition "Succeeded or Failed"
Apr 13 13:20:54.863: INFO: deleting Pod "azuredisk-4728"/"azuredisk-volume-tester-99g2q"
Apr 13 13:20:54.984: INFO: Pod azuredisk-volume-tester-99g2q has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-99g2q in namespace azuredisk-4728
Apr 13 13:20:55.101: INFO: deleting PVC "azuredisk-4728"/"pvc-grsz6"
Apr 13 13:20:55.101: INFO: Deleting PersistentVolumeClaim "pvc-grsz6"
... skipping 102 lines ...
Apr 13 13:21:45.582: INFO: PersistentVolumeClaim pvc-7cn9j found and phase=Bound (4.30764901s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Apr 13 13:21:45.892: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-9z95v" in namespace "azuredisk-5356" to be "Error status code"
Apr 13 13:21:45.994: INFO: Pod "azuredisk-volume-tester-9z95v": Phase="Pending", Reason="", readiness=false. Elapsed: 101.52077ms
Apr 13 13:21:48.096: INFO: Pod "azuredisk-volume-tester-9z95v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203995487s
Apr 13 13:21:50.199: INFO: Pod "azuredisk-volume-tester-9z95v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.307009371s
Apr 13 13:21:52.302: INFO: Pod "azuredisk-volume-tester-9z95v": Phase="Pending", Reason="", readiness=false. Elapsed: 6.409971363s
Apr 13 13:21:54.405: INFO: Pod "azuredisk-volume-tester-9z95v": Phase="Pending", Reason="", readiness=false. Elapsed: 8.512334982s
Apr 13 13:21:56.508: INFO: Pod "azuredisk-volume-tester-9z95v": Phase="Pending", Reason="", readiness=false. Elapsed: 10.615348662s
... skipping 13 lines ...
Apr 13 13:22:25.960: INFO: Pod "azuredisk-volume-tester-9z95v": Phase="Pending", Reason="", readiness=false. Elapsed: 40.067209005s
Apr 13 13:22:28.066: INFO: Pod "azuredisk-volume-tester-9z95v": Phase="Pending", Reason="", readiness=false. Elapsed: 42.173818029s
Apr 13 13:22:30.172: INFO: Pod "azuredisk-volume-tester-9z95v": Phase="Pending", Reason="", readiness=false. Elapsed: 44.279981932s
Apr 13 13:22:32.278: INFO: Pod "azuredisk-volume-tester-9z95v": Phase="Running", Reason="", readiness=true. Elapsed: 46.38574086s
Apr 13 13:22:34.383: INFO: Pod "azuredisk-volume-tester-9z95v": Phase="Running", Reason="", readiness=true. Elapsed: 48.490876046s
Apr 13 13:22:36.489: INFO: Pod "azuredisk-volume-tester-9z95v": Phase="Running", Reason="", readiness=true. Elapsed: 50.596814322s
Apr 13 13:22:38.596: INFO: Pod "azuredisk-volume-tester-9z95v": Phase="Failed", Reason="", readiness=false. Elapsed: 52.703384763s
STEP: Saw pod failure
Apr 13 13:22:38.596: INFO: Pod "azuredisk-volume-tester-9z95v" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Apr 13 13:22:38.733: INFO: deleting Pod "azuredisk-5356"/"azuredisk-volume-tester-9z95v"
Apr 13 13:22:38.840: INFO: Pod azuredisk-volume-tester-9z95v has the following logs: out-file : Access to the path 'C:\mnt\test-1\data' is denied.
At line:1 char:1
+ echo $null >> C:\mnt\test-1\data
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
... skipping 458 lines ...
Apr 13 13:31:16.422: INFO: PersistentVolumeClaim pvc-8jhfc found and phase=Bound (4.307618968s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Apr 13 13:31:16.735: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-fnrkt" in namespace "azuredisk-59" to be "Succeeded or Failed"
Apr 13 13:31:16.845: INFO: Pod "azuredisk-volume-tester-fnrkt": Phase="Pending", Reason="", readiness=false. Elapsed: 109.928232ms
Apr 13 13:31:18.951: INFO: Pod "azuredisk-volume-tester-fnrkt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21667981s
Apr 13 13:31:21.059: INFO: Pod "azuredisk-volume-tester-fnrkt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324495159s
Apr 13 13:31:23.164: INFO: Pod "azuredisk-volume-tester-fnrkt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.429783006s
Apr 13 13:31:25.272: INFO: Pod "azuredisk-volume-tester-fnrkt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.537080711s
Apr 13 13:31:27.379: INFO: Pod "azuredisk-volume-tester-fnrkt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.644822021s
... skipping 21 lines ...
Apr 13 13:32:13.718: INFO: Pod "azuredisk-volume-tester-fnrkt": Phase="Running", Reason="", readiness=true. Elapsed: 56.983592194s
Apr 13 13:32:15.828: INFO: Pod "azuredisk-volume-tester-fnrkt": Phase="Running", Reason="", readiness=true. Elapsed: 59.093764306s
Apr 13 13:32:17.934: INFO: Pod "azuredisk-volume-tester-fnrkt": Phase="Running", Reason="", readiness=true. Elapsed: 1m1.199533303s
Apr 13 13:32:20.040: INFO: Pod "azuredisk-volume-tester-fnrkt": Phase="Running", Reason="", readiness=true. Elapsed: 1m3.305695728s
Apr 13 13:32:22.146: INFO: Pod "azuredisk-volume-tester-fnrkt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m5.411314018s
STEP: Saw pod success
Apr 13 13:32:22.146: INFO: Pod "azuredisk-volume-tester-fnrkt" satisfied condition "Succeeded or Failed"
Apr 13 13:32:22.146: INFO: deleting Pod "azuredisk-59"/"azuredisk-volume-tester-fnrkt"
Apr 13 13:32:22.360: INFO: Pod azuredisk-volume-tester-fnrkt has the following logs: hello world
hello world
hello world

STEP: Deleting pod azuredisk-volume-tester-fnrkt in namespace azuredisk-59
... skipping 181 lines ...
Apr 13 13:34:13.742: INFO: PersistentVolumeClaim pvc-5fzmt found but phase is Pending instead of Bound.
Apr 13 13:34:15.845: INFO: PersistentVolumeClaim pvc-5fzmt found and phase=Bound (4.308768612s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Apr 13 13:34:16.158: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-pl6f2" in namespace "azuredisk-8582" to be "Succeeded or Failed"
Apr 13 13:34:16.262: INFO: Pod "azuredisk-volume-tester-pl6f2": Phase="Pending", Reason="", readiness=false. Elapsed: 104.175562ms
Apr 13 13:34:18.369: INFO: Pod "azuredisk-volume-tester-pl6f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210484757s
Apr 13 13:34:20.474: INFO: Pod "azuredisk-volume-tester-pl6f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316274252s
Apr 13 13:34:22.581: INFO: Pod "azuredisk-volume-tester-pl6f2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.422304183s
Apr 13 13:34:24.688: INFO: Pod "azuredisk-volume-tester-pl6f2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.530108656s
Apr 13 13:34:26.794: INFO: Pod "azuredisk-volume-tester-pl6f2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.63613104s
... skipping 25 lines ...
Apr 13 13:35:21.555: INFO: Pod "azuredisk-volume-tester-pl6f2": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.39665177s
Apr 13 13:35:23.662: INFO: Pod "azuredisk-volume-tester-pl6f2": Phase="Running", Reason="", readiness=true. Elapsed: 1m7.503818634s
Apr 13 13:35:25.767: INFO: Pod "azuredisk-volume-tester-pl6f2": Phase="Running", Reason="", readiness=true. Elapsed: 1m9.609283458s
Apr 13 13:35:27.874: INFO: Pod "azuredisk-volume-tester-pl6f2": Phase="Running", Reason="", readiness=true. Elapsed: 1m11.715606614s
Apr 13 13:35:29.979: INFO: Pod "azuredisk-volume-tester-pl6f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m13.821184585s
STEP: Saw pod success
Apr 13 13:35:29.979: INFO: Pod "azuredisk-volume-tester-pl6f2" satisfied condition "Succeeded or Failed"
Apr 13 13:35:29.979: INFO: deleting Pod "azuredisk-8582"/"azuredisk-volume-tester-pl6f2"
Apr 13 13:35:30.107: INFO: Pod azuredisk-volume-tester-pl6f2 has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-pl6f2 in namespace azuredisk-8582
Apr 13 13:35:30.229: INFO: deleting PVC "azuredisk-8582"/"pvc-5fzmt"
Apr 13 13:35:30.229: INFO: Deleting PersistentVolumeClaim "pvc-5fzmt"
... skipping 561 lines ...
Apr 13 13:56:48.744: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-4cmpd] to have phase Bound
Apr 13 13:56:48.846: INFO: PersistentVolumeClaim pvc-4cmpd found and phase=Bound (102.5535ms)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Apr 13 13:56:49.157: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-nfgvk" in namespace "azuredisk-2984" to be "Error status code"
Apr 13 13:56:49.260: INFO: Pod "azuredisk-volume-tester-nfgvk": Phase="Pending", Reason="", readiness=false. Elapsed: 102.947724ms
Apr 13 13:56:51.363: INFO: Pod "azuredisk-volume-tester-nfgvk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20684083s
Apr 13 13:56:53.467: INFO: Pod "azuredisk-volume-tester-nfgvk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.310787678s
Apr 13 13:56:55.571: INFO: Pod "azuredisk-volume-tester-nfgvk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.414439469s
Apr 13 13:56:57.675: INFO: Pod "azuredisk-volume-tester-nfgvk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.518437059s
Apr 13 13:56:59.778: INFO: Pod "azuredisk-volume-tester-nfgvk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.621661843s
... skipping 10 lines ...
Apr 13 13:57:22.921: INFO: Pod "azuredisk-volume-tester-nfgvk": Phase="Pending", Reason="", readiness=false. Elapsed: 33.764340929s
Apr 13 13:57:25.029: INFO: Pod "azuredisk-volume-tester-nfgvk": Phase="Pending", Reason="", readiness=false. Elapsed: 35.872453139s
Apr 13 13:57:27.142: INFO: Pod "azuredisk-volume-tester-nfgvk": Phase="Pending", Reason="", readiness=false. Elapsed: 37.985716927s
Apr 13 13:57:29.250: INFO: Pod "azuredisk-volume-tester-nfgvk": Phase="Pending", Reason="", readiness=false. Elapsed: 40.093611631s
Apr 13 13:57:31.358: INFO: Pod "azuredisk-volume-tester-nfgvk": Phase="Running", Reason="", readiness=true. Elapsed: 42.201661258s
Apr 13 13:57:33.465: INFO: Pod "azuredisk-volume-tester-nfgvk": Phase="Running", Reason="", readiness=true. Elapsed: 44.308672239s
Apr 13 13:57:35.586: INFO: Pod "azuredisk-volume-tester-nfgvk": Phase="Failed", Reason="", readiness=false. Elapsed: 46.429129115s
STEP: Saw pod failure
Apr 13 13:57:35.586: INFO: Pod "azuredisk-volume-tester-nfgvk" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Apr 13 13:57:35.722: INFO: deleting Pod "azuredisk-2984"/"azuredisk-volume-tester-nfgvk"
Apr 13 13:57:35.834: INFO: Pod azuredisk-volume-tester-nfgvk has the following logs: Out-File : Access to the path 'C:\mnt\test-1\data.txt' is denied.
At line:1 char:22
+ ... cho 'hello world' | Out-File -FilePath C:\mnt\test-1\data.txt; Get-Co ...
+                         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
... skipping 104 lines ...
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:69
    should succeed when creating a shared disk [disk.csi.azure.com][windows]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:143
------------------------------
Pre-Provisioned [single-az] 
  should fail when maxShares is invalid [disk.csi.azure.com][windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163
STEP: Creating a kubernetes client
Apr 13 13:58:30.761: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azuredisk
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
... skipping 46 lines ...
Apr 13 13:58:37.702: INFO: PersistentVolumeClaim pvc-95p2q found and phase=Bound (105.683362ms)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: attaching disk to node#0
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Apr 13 13:58:51.184: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-czqjj" in namespace "azuredisk-1853" to be "Succeeded or Failed"
Apr 13 13:58:51.287: INFO: Pod "azuredisk-volume-tester-czqjj": Phase="Pending", Reason="", readiness=false. Elapsed: 103.706313ms
Apr 13 13:58:53.390: INFO: Pod "azuredisk-volume-tester-czqjj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206335999s
Apr 13 13:58:55.494: INFO: Pod "azuredisk-volume-tester-czqjj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.310739839s
Apr 13 13:58:57.598: INFO: Pod "azuredisk-volume-tester-czqjj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.414766671s
Apr 13 13:58:59.702: INFO: Pod "azuredisk-volume-tester-czqjj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.517933074s
Apr 13 13:59:01.806: INFO: Pod "azuredisk-volume-tester-czqjj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.622565999s
... skipping 22 lines ...
Apr 13 13:59:50.308: INFO: Pod "azuredisk-volume-tester-czqjj": Phase="Pending", Reason="", readiness=false. Elapsed: 59.124477627s
Apr 13 13:59:52.417: INFO: Pod "azuredisk-volume-tester-czqjj": Phase="Running", Reason="", readiness=true. Elapsed: 1m1.232812573s
Apr 13 13:59:54.535: INFO: Pod "azuredisk-volume-tester-czqjj": Phase="Running", Reason="", readiness=true. Elapsed: 1m3.351061473s
Apr 13 13:59:56.643: INFO: Pod "azuredisk-volume-tester-czqjj": Phase="Running", Reason="", readiness=true. Elapsed: 1m5.459081256s
Apr 13 13:59:58.753: INFO: Pod "azuredisk-volume-tester-czqjj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m7.569162873s
STEP: Saw pod success
Apr 13 13:59:58.753: INFO: Pod "azuredisk-volume-tester-czqjj" satisfied condition "Succeeded or Failed"
Apr 13 13:59:58.753: INFO: deleting Pod "azuredisk-1853"/"azuredisk-volume-tester-czqjj"
Apr 13 13:59:58.871: INFO: Pod azuredisk-volume-tester-czqjj has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-czqjj in namespace azuredisk-1853
Apr 13 13:59:59.208: INFO: deleting PVC "azuredisk-1853"/"pvc-95p2q"
Apr 13 13:59:59.208: INFO: Deleting PersistentVolumeClaim "pvc-95p2q"
... skipping 116 lines ...
Topology Key: topology.disk.csi.azure.com/zone

Streaming logs below:
I0413 13:00:12.893821       1 azuredisk.go:165] driver userAgent: disk.csi.azure.com/v1.16.0-eb7e0710ddbc5984b82c0c8453e3a2427fa2a635 gc/go1.18 (amd64-linux) e2e-test
I0413 13:00:12.894235       1 azure_disk_utils.go:158] reading cloud config from secret kube-system/azure-cloud-provider
I0413 13:00:12.905442       1 round_trippers.go:553] GET https://10.96.0.1:443/api/v1/namespaces/kube-system/secrets/azure-cloud-provider 404 Not Found in 11 milliseconds
W0413 13:00:12.905777       1 azure_disk_utils.go:165] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found
I0413 13:00:12.905809       1 azure_disk_utils.go:170] could not read cloud config from secret kube-system/azure-cloud-provider
I0413 13:00:12.905817       1 azure_disk_utils.go:180] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json
I0413 13:00:12.905843       1 azure_disk_utils.go:188] read cloud config from file: /etc/kubernetes/azure.json successfully
I0413 13:00:12.906412       1 azure_auth.go:245] Using AzurePublicCloud environment
I0413 13:00:12.906452       1 azure_auth.go:130] azure: using client_id+client_secret to retrieve access token
I0413 13:00:12.906515       1 azure_diskclient.go:67] Azure DisksClient using API version: 2021-04-01
... skipping 247 lines ...
I0413 13:17:58.960734       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=5.255316575 request="azuredisk_csi_driver_controller_delete_volume" resource_group="capz-ltahza" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/pvc-a26e8f44-5d8e-4862-8c02-b85fcebe8848="" result_code="succeeded"
I0413 13:17:58.960755       1 utils.go:84] GRPC response: {}
I0413 13:18:10.498790       1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume
I0413 13:18:10.498816       1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.disk.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.disk.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":10737418240},"name":"pvc-299d20d2-b083-45f7-8b17-944b230952c4","parameters":{"csi.storage.k8s.io/pv/name":"pvc-299d20d2-b083-45f7-8b17-944b230952c4","csi.storage.k8s.io/pvc/name":"pvc-8mvpk","csi.storage.k8s.io/pvc/namespace":"azuredisk-2540","enableAsyncAttach":"false","networkAccessPolicy":"DenyAll","skuName":"Standard_LRS","userAgent":"azuredisk-e2e-test"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]}
I0413 13:18:10.499392       1 azure_disk_utils.go:158] reading cloud config from secret kube-system/azure-cloud-provider
I0413 13:18:10.511327       1 round_trippers.go:553] GET https://10.96.0.1:443/api/v1/namespaces/kube-system/secrets/azure-cloud-provider 404 Not Found in 11 milliseconds
W0413 13:18:10.511442       1 azure_disk_utils.go:165] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found
I0413 13:18:10.511464       1 azure_disk_utils.go:170] could not read cloud config from secret kube-system/azure-cloud-provider
I0413 13:18:10.511476       1 azure_disk_utils.go:180] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json
I0413 13:18:10.511528       1 azure_disk_utils.go:188] read cloud config from file: /etc/kubernetes/azure.json successfully
I0413 13:18:10.511765       1 azure_auth.go:245] Using AzurePublicCloud environment
I0413 13:18:10.511893       1 azure_auth.go:130] azure: using client_id+client_secret to retrieve access token
I0413 13:18:10.511943       1 azure_diskclient.go:67] Azure DisksClient using API version: 2021-04-01
... skipping 83 lines ...
I0413 13:19:48.582695       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=5.188886427 request="azuredisk_csi_driver_controller_delete_volume" resource_group="capz-ltahza" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/pvc-299d20d2-b083-45f7-8b17-944b230952c4="" result_code="succeeded"
I0413 13:19:48.582726       1 utils.go:84] GRPC response: {}
I0413 13:19:55.289375       1 utils.go:77] GRPC call: /csi.v1.Controller/CreateVolume
I0413 13:19:55.289405       1 utils.go:78] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"topology.disk.csi.azure.com/zone":""}}],"requisite":[{"segments":{"topology.disk.csi.azure.com/zone":""}}]},"capacity_range":{"required_bytes":1099511627776},"name":"pvc-03631ef7-2dd8-47ff-a65e-bc47e397c84f","parameters":{"csi.storage.k8s.io/pv/name":"pvc-03631ef7-2dd8-47ff-a65e-bc47e397c84f","csi.storage.k8s.io/pvc/name":"pvc-grsz6","csi.storage.k8s.io/pvc/namespace":"azuredisk-4728","enableAsyncAttach":"false","enableBursting":"true","perfProfile":"Basic","skuName":"Premium_LRS","userAgent":"azuredisk-e2e-test"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]}
I0413 13:19:55.290058       1 azure_disk_utils.go:158] reading cloud config from secret kube-system/azure-cloud-provider
I0413 13:19:55.300969       1 round_trippers.go:553] GET https://10.96.0.1:443/api/v1/namespaces/kube-system/secrets/azure-cloud-provider 404 Not Found in 10 milliseconds
W0413 13:19:55.301166       1 azure_disk_utils.go:165] InitializeCloudFromSecret: failed to get cloud config from secret kube-system/azure-cloud-provider: failed to get secret kube-system/azure-cloud-provider: secrets "azure-cloud-provider" not found
I0413 13:19:55.301192       1 azure_disk_utils.go:170] could not read cloud config from secret kube-system/azure-cloud-provider
I0413 13:19:55.301200       1 azure_disk_utils.go:180] use default AZURE_CREDENTIAL_FILE env var: /etc/kubernetes/azure.json
I0413 13:19:55.301238       1 azure_disk_utils.go:188] read cloud config from file: /etc/kubernetes/azure.json successfully
I0413 13:19:55.301487       1 azure_auth.go:245] Using AzurePublicCloud environment
I0413 13:19:55.301591       1 azure_auth.go:130] azure: using client_id+client_secret to retrieve access token
I0413 13:19:55.301640       1 azure_diskclient.go:67] Azure DisksClient using API version: 2021-04-01
... skipping 1543 lines ...
I0413 13:47:36.985668       1 azure_controller_common.go:341] Trying to detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/pvc-4ff38da6-97fc-4575-b44d-113140de8bbf from node capz-ltah-dxbjr, diskMap: map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/capz-ltahza/providers/microsoft.compute/disks/pvc-4ff38da6-97fc-4575-b44d-113140de8bbf:pvc-4ff38da6-97fc-4575-b44d-113140de8bbf]
E0413 13:47:36.985691       1 azure_controller_standard.go:163] detach azure disk on node(capz-ltah-dxbjr): disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/capz-ltahza/providers/microsoft.compute/disks/pvc-4ff38da6-97fc-4575-b44d-113140de8bbf:pvc-4ff38da6-97fc-4575-b44d-113140de8bbf]) not found
I0413 13:47:36.985705       1 azure_controller_standard.go:184] azureDisk - update(capz-ltahza): vm(capz-ltah-dxbjr) - detach disk list(capz-ltah-dxbjr)%!(EXTRA map[string]string=map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/capz-ltahza/providers/microsoft.compute/disks/pvc-4ff38da6-97fc-4575-b44d-113140de8bbf:pvc-4ff38da6-97fc-4575-b44d-113140de8bbf])
I0413 13:47:40.579166       1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume
I0413 13:47:40.579218       1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/pvc-4ff38da6-97fc-4575-b44d-113140de8bbf"}
I0413 13:47:40.579361       1 controllerserver.go:294] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/pvc-4ff38da6-97fc-4575-b44d-113140de8bbf)
I0413 13:47:40.579408       1 controllerserver.go:296] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/pvc-4ff38da6-97fc-4575-b44d-113140de8bbf) returned with failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/pvc-4ff38da6-97fc-4575-b44d-113140de8bbf) since it's in attaching or detaching state
I0413 13:47:40.579514       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=0.000103099 request="azuredisk_csi_driver_controller_delete_volume" resource_group="capz-ltahza" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/pvc-4ff38da6-97fc-4575-b44d-113140de8bbf="" result_code="failed"
E0413 13:47:40.579535       1 utils.go:82] GRPC error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/pvc-4ff38da6-97fc-4575-b44d-113140de8bbf) since it's in attaching or detaching state
I0413 13:47:42.323153       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=5.337407475 request="vm_update" resource_group="capz-ltahza" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="detach_disk" error_code=""
I0413 13:47:42.323198       1 azure_controller_standard.go:201] azureDisk - update(capz-ltahza): vm(capz-ltah-dxbjr) - detach disk list(map[/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourcegroups/capz-ltahza/providers/microsoft.compute/disks/pvc-4ff38da6-97fc-4575-b44d-113140de8bbf:pvc-4ff38da6-97fc-4575-b44d-113140de8bbf]) returned with <nil>
I0413 13:47:42.323218       1 azure_controller_common.go:365] azureDisk - detach disk(pvc-4ff38da6-97fc-4575-b44d-113140de8bbf, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/pvc-4ff38da6-97fc-4575-b44d-113140de8bbf) succeeded
I0413 13:47:42.323229       1 controllerserver.go:448] detach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/pvc-4ff38da6-97fc-4575-b44d-113140de8bbf from node capz-ltah-dxbjr successfully
I0413 13:47:42.323246       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=5.4865538879999995 request="azuredisk_csi_driver_controller_unpublish_volume" resource_group="capz-ltahza" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/pvc-4ff38da6-97fc-4575-b44d-113140de8bbf=capz-ltah-dxbjr result_code="succeeded"
I0413 13:47:42.323256       1 utils.go:84] GRPC response: {}
... skipping 13 lines ...
I0413 13:48:12.580856       1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/pvc-4ff38da6-97fc-4575-b44d-113140de8bbf"}
I0413 13:48:12.580952       1 controllerserver.go:294] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/pvc-4ff38da6-97fc-4575-b44d-113140de8bbf)
I0413 13:48:12.614805       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=0.033808279 request="disks_get" resource_group="capz-ltahza" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="" error_code=""
I0413 13:48:16.088057       1 utils.go:77] GRPC call: /csi.v1.Identity/Probe
I0413 13:48:16.088104       1 utils.go:78] GRPC request: {}
I0413 13:48:16.088199       1 utils.go:84] GRPC response: {"ready":{"value":true}}
I0413 13:48:27.581150       1 azure_armclient.go:335] Received error in WaitForCompletionRef: 'Future#WaitForCompletion: context has been cancelled: StatusCode=200 -- Original Error: context deadline exceeded'
I0413 13:48:27.581271       1 azure_armclient.go:719] Received error in delete.wait: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/pvc-4ff38da6-97fc-4575-b44d-113140de8bbf, error: %!s(<nil>)
I0413 13:48:27.581337       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=14.966456244 request="disks_delete" resource_group="capz-ltahza" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="" error_code=""
I0413 13:48:27.581363       1 controllerserver.go:296] delete azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/pvc-4ff38da6-97fc-4575-b44d-113140de8bbf) returned with Retriable: true, RetryAfter: 0s, HTTPStatusCode: 0, RawError: Future#WaitForCompletion: context has been cancelled: StatusCode=200 -- Original Error: context deadline exceeded
I0413 13:48:27.581401       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.000433522 request="azuredisk_csi_driver_controller_delete_volume" resource_group="capz-ltahza" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/pvc-4ff38da6-97fc-4575-b44d-113140de8bbf="" result_code="failed"
E0413 13:48:27.581441       1 utils.go:82] GRPC error: Retriable: true, RetryAfter: 0s, HTTPStatusCode: 0, RawError: Future#WaitForCompletion: context has been cancelled: StatusCode=200 -- Original Error: context deadline exceeded
I0413 13:48:46.087738       1 utils.go:77] GRPC call: /csi.v1.Identity/Probe
I0413 13:48:46.087787       1 utils.go:78] GRPC request: {}
I0413 13:48:46.087858       1 utils.go:84] GRPC response: {"ready":{"value":true}}
I0413 13:49:16.087234       1 utils.go:77] GRPC call: /csi.v1.Identity/Probe
I0413 13:49:16.087261       1 utils.go:78] GRPC request: {}
I0413 13:49:16.087316       1 utils.go:84] GRPC response: {"ready":{"value":true}}
I0413 13:49:31.582572       1 utils.go:77] GRPC call: /csi.v1.Controller/DeleteVolume
I0413 13:49:31.582601       1 utils.go:78] GRPC request: {"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/pvc-4ff38da6-97fc-4575-b44d-113140de8bbf"}
I0413 13:49:31.582710       1 controllerserver.go:294] deleting azure disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/pvc-4ff38da6-97fc-4575-b44d-113140de8bbf)
I0413 13:49:31.639000       1 azure_diskclient.go:138] Received error in disk.get.request: resourceID: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/pvc-4ff38da6-97fc-4575-b44d-113140de8bbf, error: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 404, RawError: {
  "error": {
    "code": "NotFound",
    "message": "Disk pvc-4ff38da6-97fc-4575-b44d-113140de8bbf is not found."
  }
}
I0413 13:49:31.639320       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=0.056388148 request="disks_get" resource_group="capz-ltahza" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="" error_code="NotFound"
I0413 13:49:31.639360       1 azure_managedDiskController.go:274] azureDisk - disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/pvc-4ff38da6-97fc-4575-b44d-113140de8bbf) is already deleted
... skipping 374 lines ...
I0413 13:59:06.930904       1 azure_controller_common.go:365] azureDisk - detach disk(reattach-disk-multiple-nodes, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/reattach-disk-multiple-nodes) succeeded
I0413 13:59:06.930920       1 controllerserver.go:393] Trying to attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/reattach-disk-multiple-nodes to node capz-ltah-dxbjr again
I0413 13:59:06.930929       1 azure_vmss_cache.go:332] Node capz-ltah-dxbjr has joined the cluster since the last VM cache refresh, refreshing the cache
I0413 13:59:06.970199       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=0.039245342 request="vm_list" resource_group="capz-ltahza" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="" error_code=""
I0413 13:59:06.970225       1 azure_backoff.go:101] VirtualMachinesClient.List(capz-ltahza) success
I0413 13:59:06.970268       1 azure_controller_common.go:174] found dangling volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/reattach-disk-multiple-nodes attached to node capz-ltah-58kzv, could not be attached to node(capz-ltah-dxbjr)
E0413 13:59:06.970292       1 controllerserver.go:397] Attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/reattach-disk-multiple-nodes to instance capz-ltah-dxbjr failed with disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/reattach-disk-multiple-nodes) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/virtualMachines/capz-ltah-58kzv), could not be attached to node(capz-ltah-dxbjr)
I0413 13:59:06.970389       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=15.738566691 request="azuredisk_csi_driver_controller_publish_volume" resource_group="capz-ltahza" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/reattach-disk-multiple-nodes=capz-ltah-dxbjr result_code="failed"
E0413 13:59:06.970413       1 utils.go:82] GRPC error: Attach volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/reattach-disk-multiple-nodes to instance capz-ltah-dxbjr failed with disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/reattach-disk-multiple-nodes) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/virtualMachines/capz-ltah-58kzv), could not be attached to node(capz-ltah-dxbjr)
I0413 13:59:06.983794       1 utils.go:77] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I0413 13:59:06.983817       1 utils.go:78] GRPC request: {"node_id":"capz-ltah-dxbjr","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"cachingMode":"None","requestedsizegib":"10","skuName":"Premium_LRS"},"volume_id":"/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/reattach-disk-multiple-nodes"}
I0413 13:59:07.007113       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=0.023112649 request="disks_get" resource_group="capz-ltahza" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="" error_code=""
I0413 13:59:07.007148       1 controllerserver.go:350] GetDiskLun returned: <nil>. Initiating attaching volume /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ltahza/providers/Microsoft.Compute/disks/reattach-disk-multiple-nodes to node capz-ltah-dxbjr.
I0413 13:59:07.007159       1 azure_vmss_cache.go:332] Node capz-ltah-dxbjr has joined the cluster since the last VM cache refresh, refreshing the cache
I0413 13:59:07.054248       1 azure_metrics.go:112] "Observed Request Latency" latency_seconds=0.047038791 request="vm_list" resource_group="capz-ltahza" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="" error_code=""
... skipping 167 lines ...
Go Version: go1.18
Platform: windows/amd64
Topology Key: topology.disk.csi.azure.com/zone

Streaming logs below:
I0413 13:00:56.486688    2664 azuredisk.go:165] driver userAgent: disk.csi.azure.com/v1.16.0-eb7e0710ddbc5984b82c0c8453e3a2427fa2a635 gc/go1.18 (amd64-windows) e2e-test
W0413 13:00:56.488466    2664 azure_disk_utils.go:226] get kubeconfig() failed with error: open /var/run/secrets/kubernetes.io/serviceaccount/token: The system cannot find the path specified.
I0413 13:00:56.488466    2664 azure_disk_utils.go:170] could not read cloud config from secret kube-system/azure-cloud-provider
I0413 13:00:56.488466    2664 azure_disk_utils.go:180] use default AZURE_CREDENTIAL_FILE env var: C:\k\azure.json
I0413 13:00:56.489025    2664 azure_disk_utils.go:188] read cloud config from file: C:\k\azure.json successfully
I0413 13:00:56.490970    2664 azure_auth.go:245] Using AzurePublicCloud environment
I0413 13:00:56.490970    2664 azure_auth.go:130] azure: using client_id+client_secret to retrieve access token
I0413 13:00:56.493491    2664 azure_diskclient.go:67] Azure DisksClient using API version: 2021-04-01
... skipping 19 lines ...
I0413 13:00:56.870632    2664 utils.go:84] GRPC response: {"name":"disk.csi.azure.com","vendor_version":"v1.16.0-eb7e0710ddbc5984b82c0c8453e3a2427fa2a635"}
I0413 13:00:57.135759    2664 utils.go:77] GRPC call: /csi.v1.Identity/GetPluginInfo
I0413 13:00:57.135759    2664 utils.go:78] GRPC request: {}
I0413 13:00:57.136624    2664 utils.go:84] GRPC response: {"name":"disk.csi.azure.com","vendor_version":"v1.16.0-eb7e0710ddbc5984b82c0c8453e3a2427fa2a635"}
I0413 13:00:57.538432    2664 utils.go:77] GRPC call: /csi.v1.Node/NodeGetInfo
I0413 13:00:57.538432    2664 utils.go:78] GRPC request: {}
I0413 13:00:57.561043    2664 azure_instance_metadata.go:138] Warning: failed to get loadbalancer metadata: failure of getting loadbalancer metadata with response "404 Not Found"
I0413 13:00:57.563105    2664 nodeserver.go:372] NodeGetInfo: nodeName(capz-ltah-58kzv), VM Size(Standard_D4s_v3)
I0413 13:00:57.563105    2664 nodeserver.go:410] got a matching size in getMaxDataDiskCount, VM Size: STANDARD_D4S_V3, MaxDataDiskCount: 8
I0413 13:00:57.563105    2664 utils.go:84] GRPC response: {"accessible_topology":{"segments":{"topology.disk.csi.azure.com/zone":""}},"max_volumes_per_node":8,"node_id":"capz-ltah-58kzv"}
I0413 13:01:45.894967    2664 utils.go:77] GRPC call: /csi.v1.Identity/Probe
I0413 13:01:45.894967    2664 utils.go:78] GRPC request: {}
I0413 13:01:45.894967    2664 utils.go:84] GRPC response: {"ready":{"value":true}}
... skipping 370 lines ...
Go Version: go1.18
Platform: windows/amd64
Topology Key: topology.disk.csi.azure.com/zone

Streaming logs below:
I0413 13:00:56.526009    2156 azuredisk.go:165] driver userAgent: disk.csi.azure.com/v1.16.0-eb7e0710ddbc5984b82c0c8453e3a2427fa2a635 gc/go1.18 (amd64-windows) e2e-test
W0413 13:00:56.528376    2156 azure_disk_utils.go:226] get kubeconfig() failed with error: open /var/run/secrets/kubernetes.io/serviceaccount/token: The system cannot find the path specified.
I0413 13:00:56.528987    2156 azure_disk_utils.go:170] could not read cloud config from secret kube-system/azure-cloud-provider
I0413 13:00:56.529033    2156 azure_disk_utils.go:180] use default AZURE_CREDENTIAL_FILE env var: C:\k\azure.json
I0413 13:00:56.529127    2156 azure_disk_utils.go:188] read cloud config from file: C:\k\azure.json successfully
I0413 13:00:56.531294    2156 azure_auth.go:245] Using AzurePublicCloud environment
I0413 13:00:56.531884    2156 azure_auth.go:130] azure: using client_id+client_secret to retrieve access token
I0413 13:00:56.533797    2156 azure_diskclient.go:67] Azure DisksClient using API version: 2021-04-01
... skipping 19 lines ...
I0413 13:00:56.795287    2156 utils.go:84] GRPC response: {"name":"disk.csi.azure.com","vendor_version":"v1.16.0-eb7e0710ddbc5984b82c0c8453e3a2427fa2a635"}
I0413 13:00:57.008475    2156 utils.go:77] GRPC call: /csi.v1.Identity/GetPluginInfo
I0413 13:00:57.008475    2156 utils.go:78] GRPC request: {}
I0413 13:00:57.008475    2156 utils.go:84] GRPC response: {"name":"disk.csi.azure.com","vendor_version":"v1.16.0-eb7e0710ddbc5984b82c0c8453e3a2427fa2a635"}
I0413 13:00:57.593923    2156 utils.go:77] GRPC call: /csi.v1.Node/NodeGetInfo
I0413 13:00:57.593923    2156 utils.go:78] GRPC request: {}
I0413 13:00:57.612834    2156 azure_instance_metadata.go:138] Warning: failed to get loadbalancer metadata: failure of getting loadbalancer metadata with response "404 Not Found"
I0413 13:00:57.612834    2156 nodeserver.go:372] NodeGetInfo: nodeName(capz-ltah-dxbjr), VM Size(Standard_D4s_v3)
I0413 13:00:57.612834    2156 nodeserver.go:410] got a matching size in getMaxDataDiskCount, VM Size: STANDARD_D4S_V3, MaxDataDiskCount: 8
I0413 13:00:57.612834    2156 utils.go:84] GRPC response: {"accessible_topology":{"segments":{"topology.disk.csi.azure.com/zone":""}},"max_volumes_per_node":8,"node_id":"capz-ltah-dxbjr"}
I0413 13:01:22.623815    2156 utils.go:77] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0413 13:01:22.623815    2156 utils.go:78] GRPC request: {}
I0413 13:01:22.624342    2156 utils.go:84] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":5}}}]}
... skipping 1787 lines ...
cloudprovider_azure_op_duration_seconds_bucket{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="capz-ltahza",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e",le="300"} 29
cloudprovider_azure_op_duration_seconds_bucket{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="capz-ltahza",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e",le="600"} 29
cloudprovider_azure_op_duration_seconds_bucket{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="capz-ltahza",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e",le="1200"} 29
cloudprovider_azure_op_duration_seconds_bucket{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="capz-ltahza",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e",le="+Inf"} 29
cloudprovider_azure_op_duration_seconds_sum{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="capz-ltahza",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e"} 544.0969824729999
cloudprovider_azure_op_duration_seconds_count{request="azuredisk_csi_driver_controller_unpublish_volume",resource_group="capz-ltahza",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e"} 29
# HELP cloudprovider_azure_op_failure_count [ALPHA] Number of failed Azure service operations
# TYPE cloudprovider_azure_op_failure_count counter
cloudprovider_azure_op_failure_count{request="azuredisk_csi_driver_controller_delete_volume",resource_group="capz-ltahza",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e"} 2
cloudprovider_azure_op_failure_count{request="azuredisk_csi_driver_controller_publish_volume",resource_group="capz-ltahza",source="disk.csi.azure.com",subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e"} 1
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 3.78e-05
... skipping 198 lines ...

JUnit report was created: /logs/artifacts/junit_01.xml


Summarizing 1 Failure:

[Fail] Dynamic Provisioning [single-az] [It] should create a volume on demand with mount options [kubernetes.io/azure-disk] [disk.csi.azure.com] [Windows] 
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:807

Ran 21 of 59 Specs in 3810.418 seconds
FAIL! -- 20 Passed | 1 Failed | 0 Pending | 38 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 5 lines ...
  If this change will be impactful to you please leave a comment on https://github.com/onsi/ginkgo/issues/711
  Learn more at: https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#removed-custom-reporters

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=1.16.5

--- FAIL: TestE2E (3810.44s)
FAIL
FAIL	sigs.k8s.io/azuredisk-csi-driver/test/e2e	3810.497s
FAIL
make: *** [Makefile:260: e2e-test] Error 1
make: Nothing to be done for 'kubectl'.
================ DUMPING LOGS FOR MANAGEMENT CLUSTER ================
Exported logs for cluster "capz" to:
/logs/artifacts/management-cluster
================ DUMPING LOGS FOR WORKLOAD CLUSTER (Linux) ==========
Deploying log-dump-daemonset
... skipping 93 lines ...