PR: andyzhangx: add migration flag in Azure volume CSI migration logic
Result: FAILURE
Tests: 1 failed / 5 succeeded
Started: 2022-06-22 03:54
Elapsed: 44m54s
Revision: 0c87093592273929413bf028971f92dc9c920f69
Refs: 108317

Test Failures


AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a volume on demand and resize it [kubernetes.io/azure-file] [file.csi.azure.com] [Windows] 43s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=AzureFile\sCSI\sDriver\sEnd\-to\-End\sTests\sDynamic\sProvisioning\sshould\screate\sa\svolume\son\sdemand\sand\sresize\sit\s\[kubernetes\.io\/azure\-file\]\s\[file\.csi\.azure\.com\]\s\[Windows\]$'
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:356
Jun 22 04:35:32.583: newPVCSize(11Gi) is not equal to newPVSize(10GiGi)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:380
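The assertion at line 380 compares the resized PVC request against the capacity reported on the PV. A minimal sketch of that kind of check, assuming `resource.Quantity` values from k8s.io/apimachinery (the variable names are taken from the failure message, not the test source; the parse step shows why a reported "10GiGi" can never match):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Values from the failure message above. "11Gi" parses cleanly;
	// "10GiGi" carries a doubled suffix and is not a valid quantity.
	newPVCSize := resource.MustParse("11Gi")

	newPVSize, err := resource.ParseQuantity("10GiGi")
	if err != nil {
		// This is the malformed-capacity case seen in the log.
		fmt.Println("PV capacity is not a valid quantity:", err)
		return
	}

	// The kind of equality check the test performs (sketch only).
	if newPVCSize.Cmp(newPVSize) != 0 {
		fmt.Printf("newPVCSize(%s) is not equal to newPVSize(%s)\n",
			newPVCSize.String(), newPVSize.String())
	}
}
```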





Error lines from build-log.txt

... skipping 81 lines ...
/home/prow/go/src/k8s.io/kubernetes /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   154  100   154    0     0   6416      0 --:--:-- --:--:-- --:--:--  6416

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100    33  100    33    0     0    253      0 --:--:-- --:--:-- --:--:--   343
Error response from daemon: manifest for capzci.azurecr.io/kube-apiserver:v1.25.0-alpha.1.67_a3dc67c38b3609 not found: manifest unknown: manifest tagged by "v1.25.0-alpha.1.67_a3dc67c38b3609" is not found
Building Kubernetes
make: Entering directory '/home/prow/go/src/k8s.io/kubernetes'
+++ [0622 03:54:42] Verifying Prerequisites....
+++ [0622 03:54:42] Building Docker image kube-build:build-ba698cbd17-5-v1.25.0-go1.18.3-bullseye.0
+++ [0622 03:57:40] Creating data container kube-build-data-ba698cbd17-5-v1.25.0-go1.18.3-bullseye.0
+++ [0622 03:58:01] Syncing sources to container
... skipping 746 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
# Deploy CAPI
curl --retry 3 -sSL https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.1.4/cluster-api-components.yaml | /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/envsubst-v2.0.0-20210730161058-179042472c46 | kubectl apply -f -
namespace/capi-system created
customresourcedefinition.apiextensions.k8s.io/clusterclasses.cluster.x-k8s.io created
... skipping 132 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! kubectl get secrets | grep capz-8bmspd-kubeconfig; do sleep 1; done"
capz-8bmspd-kubeconfig                 cluster.x-k8s.io/secret               1      1s
# Get kubeconfig and store it locally.
kubectl get secrets capz-8bmspd-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
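The two lines above read the workload cluster's kubeconfig out of a management-cluster secret and decode it. For reference, a minimal client-go equivalent of that shell pipeline (a sketch, assuming the same secret name and the default namespace; client-go returns `Data` values already base64-decoded, so no explicit decode step is needed):

```go
package main

import (
	"context"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Connect to the management cluster using its kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Fetch the CAPZ-generated kubeconfig secret.
	secret, err := client.CoreV1().Secrets("default").Get(
		context.Background(), "capz-8bmspd-kubeconfig", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Store it locally, mirroring "> ./kubeconfig" above.
	if err := os.WriteFile("./kubeconfig", secret.Data["value"], 0o600); err != nil {
		panic(err)
	}
}
```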
timeout --foreground 600 bash -c "while ! kubectl --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-8bmspd-control-plane-r2kww   NotReady   <none>   1s    v1.25.0-alpha.1.67+a3dc67c38b3609
run "kubectl --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Waiting for 1 control plane machine(s), 2 worker machine(s), and  windows machine(s) to become Ready
node/capz-8bmspd-control-plane-r2kww condition met
node/capz-8bmspd-md-0-8949h condition met
... skipping 63 lines ...
Dynamic Provisioning 
  should create a storage account with tags [file.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:73
STEP: Creating a kubernetes client
Jun 22 04:30:43.631: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azurefile
Jun 22 04:30:43.899: INFO: Error listing PodSecurityPolicies; assuming PodSecurityPolicy is disabled: the server could not find the requested resource
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
2022/06/22 04:30:44 Check driver pods if restarts ...
check the driver pods if restarts ...
======================================================================================
2022/06/22 04:30:44 Check successfully
... skipping 43 lines ...
Jun 22 04:31:03.298: INFO: PersistentVolumeClaim pvc-mgvqv found but phase is Pending instead of Bound.
Jun 22 04:31:05.333: INFO: PersistentVolumeClaim pvc-mgvqv found and phase=Bound (20.384094354s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
Jun 22 04:31:05.446: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-xbpxs" in namespace "azurefile-2540" to be "Succeeded or Failed"
Jun 22 04:31:05.479: INFO: Pod "azurefile-volume-tester-xbpxs": Phase="Pending", Reason="", readiness=false. Elapsed: 33.074655ms
Jun 22 04:31:07.514: INFO: Pod "azurefile-volume-tester-xbpxs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068509345s
Jun 22 04:31:09.552: INFO: Pod "azurefile-volume-tester-xbpxs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10586986s
Jun 22 04:31:11.589: INFO: Pod "azurefile-volume-tester-xbpxs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.143235086s
STEP: Saw pod success
Jun 22 04:31:11.589: INFO: Pod "azurefile-volume-tester-xbpxs" satisfied condition "Succeeded or Failed"
Jun 22 04:31:11.589: INFO: deleting Pod "azurefile-2540"/"azurefile-volume-tester-xbpxs"
Jun 22 04:31:11.638: INFO: Pod azurefile-volume-tester-xbpxs has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-xbpxs in namespace azurefile-2540
Jun 22 04:31:11.686: INFO: deleting PVC "azurefile-2540"/"pvc-mgvqv"
Jun 22 04:31:11.686: INFO: Deleting PersistentVolumeClaim "pvc-mgvqv"
... skipping 155 lines ...
Jun 22 04:32:58.902: INFO: PersistentVolumeClaim pvc-bpjkn found but phase is Pending instead of Bound.
Jun 22 04:33:00.936: INFO: PersistentVolumeClaim pvc-bpjkn found and phase=Bound (20.383023007s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with an error
Jun 22 04:33:01.037: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-z7szn" in namespace "azurefile-2790" to be "Error status code"
Jun 22 04:33:01.070: INFO: Pod "azurefile-volume-tester-z7szn": Phase="Pending", Reason="", readiness=false. Elapsed: 33.819364ms
Jun 22 04:33:03.106: INFO: Pod "azurefile-volume-tester-z7szn": Phase="Running", Reason="", readiness=true. Elapsed: 2.069684004s
Jun 22 04:33:05.143: INFO: Pod "azurefile-volume-tester-z7szn": Phase="Running", Reason="", readiness=false. Elapsed: 4.106164444s
Jun 22 04:33:07.180: INFO: Pod "azurefile-volume-tester-z7szn": Phase="Failed", Reason="", readiness=false. Elapsed: 6.143112447s
STEP: Saw pod failure
Jun 22 04:33:07.180: INFO: Pod "azurefile-volume-tester-z7szn" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Jun 22 04:33:07.224: INFO: deleting Pod "azurefile-2790"/"azurefile-volume-tester-z7szn"
Jun 22 04:33:07.259: INFO: Pod azurefile-volume-tester-z7szn has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azurefile-volume-tester-z7szn in namespace azurefile-2790
Jun 22 04:33:07.313: INFO: deleting PVC "azurefile-2790"/"pvc-bpjkn"
... skipping 179 lines ...
Jun 22 04:34:56.200: INFO: PersistentVolumeClaim pvc-nwjdm found but phase is Pending instead of Bound.
Jun 22 04:34:58.233: INFO: PersistentVolumeClaim pvc-nwjdm found and phase=Bound (2.13587352s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
Jun 22 04:34:58.338: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-s8qgf" in namespace "azurefile-4538" to be "Succeeded or Failed"
Jun 22 04:34:58.371: INFO: Pod "azurefile-volume-tester-s8qgf": Phase="Pending", Reason="", readiness=false. Elapsed: 32.803854ms
Jun 22 04:35:00.407: INFO: Pod "azurefile-volume-tester-s8qgf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068800615s
Jun 22 04:35:02.443: INFO: Pod "azurefile-volume-tester-s8qgf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.10500298s
STEP: Saw pod success
Jun 22 04:35:02.443: INFO: Pod "azurefile-volume-tester-s8qgf" satisfied condition "Succeeded or Failed"
STEP: resizing the pvc
STEP: sleep 30s waiting for resize complete
STEP: checking the resizing result
STEP: checking the resizing PV result
Jun 22 04:35:32.583: FAIL: newPVCSize(11Gi) is not equal to newPVSize(10GiGi)

Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e.glob..func1.10()
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:380 +0x25c
sigs.k8s.io/azurefile-csi-driver/test/e2e.TestE2E(0x0?)
	/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:239 +0x11f
... skipping 22 lines ...
Jun 22 04:35:37.903: INFO: At 2022-06-22 04:34:58 +0000 UTC - event for azurefile-volume-tester-s8qgf: {default-scheduler } Scheduled: Successfully assigned azurefile-4538/azurefile-volume-tester-s8qgf to capz-8bmspd-md-0-8949h
Jun 22 04:35:37.903: INFO: At 2022-06-22 04:34:59 +0000 UTC - event for azurefile-volume-tester-s8qgf: {kubelet capz-8bmspd-md-0-8949h} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine
Jun 22 04:35:37.903: INFO: At 2022-06-22 04:34:59 +0000 UTC - event for azurefile-volume-tester-s8qgf: {kubelet capz-8bmspd-md-0-8949h} Created: Created container volume-tester
Jun 22 04:35:37.903: INFO: At 2022-06-22 04:34:59 +0000 UTC - event for azurefile-volume-tester-s8qgf: {kubelet capz-8bmspd-md-0-8949h} Started: Started container volume-tester
Jun 22 04:35:37.903: INFO: At 2022-06-22 04:35:02 +0000 UTC - event for pvc-nwjdm: {volume_expand } ExternalExpanding: CSI migration enabled for kubernetes.io/azure-file; waiting for external resizer to expand the pvc
Jun 22 04:35:37.903: INFO: At 2022-06-22 04:35:02 +0000 UTC - event for pvc-nwjdm: {external-resizer file.csi.azure.com } Resizing: External resizer is resizing volume pvc-ba8c9a2c-bdcd-4f20-917b-3a7e5a52de39
Jun 22 04:35:37.903: INFO: At 2022-06-22 04:35:02 +0000 UTC - event for pvc-nwjdm: {external-resizer file.csi.azure.com } VolumeResizeFailed: resize volume "pvc-ba8c9a2c-bdcd-4f20-917b-3a7e5a52de39" by resizer "file.csi.azure.com" failed: rpc error: code = Unimplemented desc = vhd disk volume(capz-8bmspd#fdd3e112d80284e6f924836#pvc-ba8c9a2c-bdcd-4f20-917b-3a7e5a52de39#pvc-ba8c9a2c-bdcd-4f20-917b-3a7e5a52de39#azurefile-4538) is not supported on ControllerExpandVolume
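The last event is the root cause: with CSI migration enabled, the in-tree expand path hands off to the external resizer, and the driver's ControllerExpandVolume rejects vhd-backed volumes outright. A hedged sketch of how a CSI driver surfaces that gRPC error (isVHDDiskVolume is a hypothetical stand-in for the driver's real volume-handle parsing):

```go
package driver

import (
	"context"
	"strings"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

type Driver struct{}

// isVHDDiskVolume is a hypothetical helper: azurefile volume handles are
// "#"-separated fields, and vhd-backed volumes carry a disk marker in the
// handle. The real driver's parsing differs; this is illustrative only.
func isVHDDiskVolume(volumeID string) bool {
	return strings.Count(volumeID, "#") >= 4 // placeholder heuristic only
}

// ControllerExpandVolume returns codes.Unimplemented for vhd disk volumes,
// which the external resizer reports as the VolumeResizeFailed event above.
func (d *Driver) ControllerExpandVolume(ctx context.Context, req *csi.ControllerExpandVolumeRequest) (*csi.ControllerExpandVolumeResponse, error) {
	volumeID := req.GetVolumeId()
	if isVHDDiskVolume(volumeID) {
		return nil, status.Errorf(codes.Unimplemented,
			"vhd disk volume(%s) is not supported on ControllerExpandVolume", volumeID)
	}
	// ... expand the backing file share here ...
	return &csi.ControllerExpandVolumeResponse{
		CapacityBytes: req.GetCapacityRange().GetRequiredBytes(),
	}, nil
}
```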
Jun 22 04:35:37.936: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jun 22 04:35:37.936: INFO: 
Jun 22 04:35:37.983: INFO: 
Logging node info for node capz-8bmspd-control-plane-r2kww
Jun 22 04:35:38.020: INFO: Node Info: &Node{ObjectMeta:{capz-8bmspd-control-plane-r2kww    ce408d3b-654e-498f-b0d6-256d99e5160f 2200 0 2022-06-22 04:26:42 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eastus2 failure-domain.beta.kubernetes.io/zone:eastus2-1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-8bmspd-control-plane-r2kww kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:eastus2 topology.kubernetes.io/zone:eastus2-1] map[cluster.x-k8s.io/cluster-name:capz-8bmspd cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/machine:capz-8bmspd-control-plane-tbwzq cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-8bmspd-control-plane csi.volume.kubernetes.io/nodeid:{"file.csi.azure.com":"capz-8bmspd-control-plane-r2kww"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.36.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2022-06-22 04:26:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-06-22 04:26:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-06-22 04:27:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-06-22 04:27:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-06-22 04:27:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-06-22 04:34:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-8bmspd/providers/Microsoft.Compute/virtualMachines/capz-8bmspd-control-plane-r2kww,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133018140672 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119716326407 0} {<nil>} 119716326407 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-22 04:27:17 +0000 UTC,LastTransitionTime:2022-06-22 04:27:17 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-22 04:34:25 +0000 UTC,LastTransitionTime:2022-06-22 04:26:25 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-22 04:34:25 +0000 UTC,LastTransitionTime:2022-06-22 04:26:25 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-22 04:34:25 +0000 UTC,LastTransitionTime:2022-06-22 04:26:25 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-22 04:34:25 +0000 UTC,LastTransitionTime:2022-06-22 04:27:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-8bmspd-control-plane-r2kww,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:eba24c9499da4e1399d618f58a6d9927,SystemUUID:863141b8-d888-3743-9484-b0cd445bd263,BootID:fec72c67-4d77-47f2-b8ab-c5c76f3c70c8,KernelVersion:5.4.0-1085-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.25.0-alpha.1.67+a3dc67c38b3609,KubeProxyVersion:v1.25.0-alpha.1.67+a3dc67c38b3609,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5 k8s.gcr.io/etcd:3.5.3-0],SizeBytes:102143581,},ContainerImage{Names:[mcr.microsoft.com/k8s/csi/azurefile-csi@sha256:d0e18e2b41040f7a0a68324bed4b1cdc94e0d5009ed816f9c00f7ad45f640c67 mcr.microsoft.com/k8s/csi/azurefile-csi:latest],SizeBytes:75743702,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:7e75c20c0fb0a334fa364546ece4c11a61a7595ce2e27de265cacb4e7ccc7f9f k8s.gcr.io/kube-proxy:v1.24.2],SizeBytes:39515830,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.25.0-alpha.1.65_3beb8dc5967801 k8s.gcr.io/kube-proxy:v1.25.0-alpha.1.65_3beb8dc5967801],SizeBytes:39501134,},ContainerImage{Names:[capzci.azurecr.io/kube-proxy@sha256:1fd411e34636f0d08820f0c39c8a0c0aa7b04e4e989f0942f1390805e66fadbf capzci.azurecr.io/kube-proxy:v1.25.0-alpha.1.67_a3dc67c38b3609],SizeBytes:39499404,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:433696d8a90870c405fc2d42020aff0966fb3f1c59bdd1f5077f41335b327c9a k8s.gcr.io/kube-apiserver:v1.24.2],SizeBytes:33795763,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.25.0-alpha.1.65_3beb8dc5967801 k8s.gcr.io/kube-apiserver:v1.25.0-alpha.1.65_3beb8dc5967801],SizeBytes:33779236,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:d255427f14c9236088c22cd94eb434d7c6a05f615636eac0b9681566cd142753 k8s.gcr.io/kube-controller-manager:v1.24.2],SizeBytes:31035052,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.25.0-alpha.1.65_3beb8dc5967801 k8s.gcr.io/kube-controller-manager:v1.25.0-alpha.1.65_3beb8dc5967801],SizeBytes:31010080,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.25.0-alpha.1.65_3beb8dc5967801 k8s.gcr.io/kube-scheduler:v1.25.0-alpha.1.65_3beb8dc5967801],SizeBytes:15533645,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:b5bc69ac1e173a58a2b3af11ba65057ff2b71de25d0f93ab947e16714a896a1f 
k8s.gcr.io/kube-scheduler:v1.24.2],SizeBytes:15488980,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar@sha256:2fbd1e1a0538a06f2061afd45975df70c942654aa7f86e920720169ee439c2d6 mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.5.1],SizeBytes:9578961,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe@sha256:31547791294872570393470991481c2477a311031d3a03e0ae54eb164347dc34 mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0],SizeBytes:8689744,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 22 04:35:38.020: INFO: 
... skipping 780 lines ...

JUnit report was created: /logs/artifacts/junit_01.xml


Summarizing 1 Failure:

[Fail] Dynamic Provisioning [It] should create a volume on demand and resize it [kubernetes.io/azure-file] [file.csi.azure.com] [Windows] 
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:380

Ran 6 of 34 Specs in 309.745 seconds
FAIL! -- 5 Passed | 1 Failed | 0 Pending | 28 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 5 lines ...
  If this change will be impactful to you please leave a comment on https://github.com/onsi/ginkgo/issues/711
  Learn more at: https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#removed-custom-reporters

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=1.16.5

--- FAIL: TestE2E (309.75s)
FAIL
FAIL	sigs.k8s.io/azurefile-csi-driver/test/e2e	309.815s
FAIL
make: *** [Makefile:85: e2e-test] Error 1
NAME                              STATUS   ROLES           AGE     VERSION                             INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
capz-8bmspd-control-plane-r2kww   Ready    control-plane   9m11s   v1.25.0-alpha.1.67+a3dc67c38b3609   10.0.0.4      <none>        Ubuntu 18.04.6 LTS   5.4.0-1085-azure   containerd://1.6.2
capz-8bmspd-md-0-8949h            Ready    <none>          7m42s   v1.25.0-alpha.1.67+a3dc67c38b3609   10.1.0.4      <none>        Ubuntu 18.04.6 LTS   5.4.0-1085-azure   containerd://1.6.2
capz-8bmspd-md-0-wdhck            Ready    <none>          7m41s   v1.25.0-alpha.1.67+a3dc67c38b3609   10.1.0.5      <none>        Ubuntu 18.04.6 LTS   5.4.0-1085-azure   containerd://1.6.2
NAMESPACE     NAME                                                      READY   STATUS    RESTARTS   AGE     IP               NODE                              NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-57cb778775-n42bq                  1/1     Running   0          9m3s    192.168.36.4     capz-8bmspd-control-plane-r2kww   <none>           <none>
... skipping 93 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-vkj6f, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-8bmspd-control-plane-r2kww, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-proxy-vkj6f
STEP: Collecting events for Pod kube-system/kube-proxy-br9vb
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-8bmspd-control-plane-r2kww, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-8bmspd-control-plane-r2kww
STEP: failed to find events of Pod "kube-controller-manager-capz-8bmspd-control-plane-r2kww"
STEP: failed to find events of Pod "kube-apiserver-capz-8bmspd-control-plane-r2kww"
STEP: failed to find events of Pod "kube-scheduler-capz-8bmspd-control-plane-r2kww"
STEP: Error starting logs stream for pod kube-system/kube-scheduler-capz-8bmspd-control-plane-r2kww, container kube-scheduler: container "kube-scheduler" in pod "kube-scheduler-capz-8bmspd-control-plane-r2kww" is not available
STEP: Error starting logs stream for pod kube-system/kube-apiserver-capz-8bmspd-control-plane-r2kww, container kube-apiserver: container "kube-apiserver" in pod "kube-apiserver-capz-8bmspd-control-plane-r2kww" is not available
STEP: Error starting logs stream for pod kube-system/kube-controller-manager-capz-8bmspd-control-plane-r2kww, container kube-controller-manager: container "kube-controller-manager" in pod "kube-controller-manager-capz-8bmspd-control-plane-r2kww" is not available
STEP: Fetching activity logs took 959.090621ms
================ REDACTING LOGS ================
All sensitive variables are redacted
make: Entering directory '/home/prow/go/src/k8s.io/kubernetes'
+++ [0622 04:38:28] Verifying Prerequisites....
+++ [0622 04:38:32] Removing _output directory
... skipping 12 lines ...