Recent runs | View in Spyglass
PR | yingchunliu-zte: unmountVolumes check shouldPodRuntimeBeRemoved
Result | FAILURE
Tests | 1 failed / 5 succeeded
Started |
Elapsed | 44m7s
Revision | c1f77b354e7291936d44bc81a71a05c46b5cc08c
Refs | 110682
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=AzureFile\sCSI\sDriver\sEnd\-to\-End\sTests\sDynamic\sProvisioning\sshould\screate\sa\svolume\son\sdemand\sand\sresize\sit\s\[kubernetes\.io\/azure\-file\]\s\[file\.csi\.azure\.com\]\s\[Windows\]$'
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:356
Jun 21 22:30:39.928: newPVCSize(11Gi) is not equal to newPVSize(10GiGi)
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:380
from junit_01.xml
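The doubled "GiGi" suffix in the failure message suggests the test appends a "Gi" unit to a quantity string that already carries one, while the PV itself never grew past 10Gi because the expand RPC returned Unimplemented (see the events below). The following is a minimal, hypothetical Go sketch of that formatting pattern; it is not the driver's actual test code at dynamic_provisioning_test.go:380, and the variable names are illustrative.

// Hypothetical sketch only: shows how adding "Gi" to a resource.Quantity's
// String() (which already ends in "Gi") produces the "10GiGi" seen above.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	newPVCSizeGi := int64(11)               // requested size after the resize, in GiB
	newPVSize := resource.MustParse("10Gi") // PV capacity, unchanged because expansion did not happen

	if newPVSize.Value() != newPVCSizeGi*1024*1024*1024 {
		// newPVSize.String() is already "10Gi", so the extra "Gi" doubles the suffix.
		fmt.Printf("newPVCSize(%dGi) is not equal to newPVSize(%vGi)\n",
			newPVCSizeGi, newPVSize.String())
	}
}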
STEP: Creating a kubernetes client
Jun 21 22:30:01.322: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azurefile
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
Jun 21 22:30:02.468: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: setting up the StorageClass
STEP: creating a StorageClass
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: waiting for PVC to be in phase "Bound"
Jun 21 22:30:02.680: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-qz9kx] to have phase Bound
Jun 21 22:30:02.781: INFO: PersistentVolumeClaim pvc-qz9kx found but phase is Pending instead of Bound.
Jun 21 22:30:04.884: INFO: PersistentVolumeClaim pvc-qz9kx found and phase=Bound (2.204426994s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
Jun 21 22:30:05.191: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-bqxpl" in namespace "azurefile-2546" to be "Succeeded or Failed"
Jun 21 22:30:05.292: INFO: Pod "azurefile-volume-tester-bqxpl": Phase="Pending", Reason="", readiness=false. Elapsed: 101.899868ms
Jun 21 22:30:07.400: INFO: Pod "azurefile-volume-tester-bqxpl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209824544s
Jun 21 22:30:09.510: INFO: Pod "azurefile-volume-tester-bqxpl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.319033579s
STEP: Saw pod success
Jun 21 22:30:09.510: INFO: Pod "azurefile-volume-tester-bqxpl" satisfied condition "Succeeded or Failed"
STEP: resizing the pvc
STEP: sleep 30s waiting for resize complete
STEP: checking the resizing result
STEP: checking the resizing PV result
Jun 21 22:30:39.928: FAIL: newPVCSize(11Gi) is not equal to newPVSize(10GiGi)
Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e.glob..func1.10()
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:380 +0x25c
sigs.k8s.io/azurefile-csi-driver/test/e2e.TestE2E(0x0?)
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:239 +0x11f
testing.tRunner(0xc000582680, 0x21350e8)
  /usr/local/go/src/testing/testing.go:1439 +0x102
created by testing.(*T).Run
  /usr/local/go/src/testing/testing.go:1486 +0x35f
Jun 21 22:30:39.928: INFO: deleting Pod "azurefile-2546"/"azurefile-volume-tester-bqxpl"
Jun 21 22:30:40.045: INFO: Pod azurefile-volume-tester-bqxpl has the following logs: hello world
STEP: Deleting pod azurefile-volume-tester-bqxpl in namespace azurefile-2546
Jun 21 22:30:40.162: INFO: deleting PVC "azurefile-2546"/"pvc-qz9kx"
Jun 21 22:30:40.162: INFO: Deleting PersistentVolumeClaim "pvc-qz9kx"
STEP: waiting for claim's PV "pvc-164df6eb-b1d2-4a2a-9537-298e19661ce1" to be deleted
Jun 21 22:30:40.266: INFO: Waiting up to 10m0s for PersistentVolume pvc-164df6eb-b1d2-4a2a-9537-298e19661ce1 to get deleted
Jun 21 22:30:40.368: INFO: PersistentVolume pvc-164df6eb-b1d2-4a2a-9537-298e19661ce1 found and phase=Released (102.24343ms)
Jun 21 22:30:45.473: INFO: PersistentVolume pvc-164df6eb-b1d2-4a2a-9537-298e19661ce1 was removed
Jun 21 22:30:45.473: INFO: Waiting up to 5m0s for PersistentVolumeClaim azurefile-2546 to be removed
Jun 21 22:30:45.575: INFO: Claim "azurefile-2546" in namespace "pvc-qz9kx" doesn't exist in the system
Jun 21 22:30:45.575: INFO: deleting StorageClass azurefile-2546-kubernetes.io-azure-file-dynamic-sc-rbm4n
STEP: Collecting events from namespace "azurefile-2546".
STEP: Found 10 events.
Jun 21 22:30:45.793: INFO: At 2022-06-21 22:30:02 +0000 UTC - event for pvc-qz9kx: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "file.csi.azure.com" or manually created by system administrator
Jun 21 22:30:45.793: INFO: At 2022-06-21 22:30:02 +0000 UTC - event for pvc-qz9kx: {file.csi.azure.com_capz-qx7od9-mp-0000001_8651d885-4c5b-452f-9147-6041f15d1f6e } Provisioning: External provisioner is provisioning volume for claim "azurefile-2546/pvc-qz9kx"
Jun 21 22:30:45.793: INFO: At 2022-06-21 22:30:03 +0000 UTC - event for pvc-qz9kx: {file.csi.azure.com_capz-qx7od9-mp-0000001_8651d885-4c5b-452f-9147-6041f15d1f6e } ProvisioningSucceeded: Successfully provisioned volume pvc-164df6eb-b1d2-4a2a-9537-298e19661ce1
Jun 21 22:30:45.793: INFO: At 2022-06-21 22:30:05 +0000 UTC - event for azurefile-volume-tester-bqxpl: {default-scheduler } Scheduled: Successfully assigned azurefile-2546/azurefile-volume-tester-bqxpl to capz-qx7od9-mp-0000001
Jun 21 22:30:45.793: INFO: At 2022-06-21 22:30:06 +0000 UTC - event for azurefile-volume-tester-bqxpl: {kubelet capz-qx7od9-mp-0000001} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine
Jun 21 22:30:45.793: INFO: At 2022-06-21 22:30:06 +0000 UTC - event for azurefile-volume-tester-bqxpl: {kubelet capz-qx7od9-mp-0000001} Created: Created container volume-tester
Jun 21 22:30:45.793: INFO: At 2022-06-21 22:30:06 +0000 UTC - event for azurefile-volume-tester-bqxpl: {kubelet capz-qx7od9-mp-0000001} Started: Started container volume-tester
Jun 21 22:30:45.793: INFO: At 2022-06-21 22:30:09 +0000 UTC - event for pvc-qz9kx: {volume_expand } ExternalExpanding: CSI migration enabled for kubernetes.io/azure-file; waiting for external resizer to expand the pvc
Jun 21 22:30:45.793: INFO: At 2022-06-21 22:30:09 +0000 UTC - event for pvc-qz9kx: {external-resizer file.csi.azure.com } Resizing: External resizer is resizing volume pvc-164df6eb-b1d2-4a2a-9537-298e19661ce1 Jun 21
22:30:45.793: INFO: At 2022-06-21 22:30:09 +0000 UTC - event for pvc-qz9kx: {external-resizer file.csi.azure.com } VolumeResizeFailed: resize volume "pvc-164df6eb-b1d2-4a2a-9537-298e19661ce1" by resizer "file.csi.azure.com" failed: rpc error: code = Unimplemented desc = vhd disk volume(capz-qx7od9#f0a7fe6cb036841ae869516#pvc-164df6eb-b1d2-4a2a-9537-298e19661ce1#pvc-164df6eb-b1d2-4a2a-9537-298e19661ce1#azurefile-2546) is not supported on ControllerExpandVolume Jun 21 22:30:45.895: INFO: POD NODE PHASE GRACE CONDITIONS Jun 21 22:30:45.895: INFO: Jun 21 22:30:46.043: INFO: Logging node info for node capz-qx7od9-control-plane-cs87r Jun 21 22:30:46.156: INFO: Node Info: &Node{ObjectMeta:{capz-qx7od9-control-plane-cs87r b91cfba9-f252-4a0f-8bd7-1e6713bdafea 2248 0 2022-06-21 22:21:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:uksouth failure-domain.beta.kubernetes.io/zone:uksouth-2 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-qx7od9-control-plane-cs87r kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:uksouth topology.kubernetes.io/zone:uksouth-2] map[cluster.x-k8s.io/cluster-name:capz-qx7od9 cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/machine:capz-qx7od9-control-plane-cmgv4 cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-qx7od9-control-plane csi.volume.kubernetes.io/nodeid:{"file.csi.azure.com":"capz-qx7od9-control-plane-cs87r"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.111.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-06-21 22:21:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-06-21 22:21:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-06-21 22:21:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-06-21 22:21:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-06-21 22:21:43 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-06-21 22:29:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-qx7od9/providers/Microsoft.Compute/virtualMachines/capz-qx7od9-control-plane-cs87r,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133018140672 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119716326407 0} {<nil>} 119716326407 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-21 22:21:43 +0000 UTC,LastTransitionTime:2022-06-21 22:21:43 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-21 22:29:19 +0000 UTC,LastTransitionTime:2022-06-21 22:20:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-21 22:29:19 +0000 UTC,LastTransitionTime:2022-06-21 22:20:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-21 22:29:19 +0000 UTC,LastTransitionTime:2022-06-21 22:20:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-21 22:29:19 +0000 UTC,LastTransitionTime:2022-06-21 22:21:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-qx7od9-control-plane-cs87r,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:db783d92dfe146a4a12f52ae78a4d762,SystemUUID:2520b718-88f8-d840-8ca3-58ee64b06d04,BootID:c48a5460-788e-4aca-b292-f32c2ed361ac,KernelVersion:5.4.0-1085-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.25.0-alpha.1.65+74cff58e7e74a4,KubeProxyVersion:v1.25.0-alpha.1.65+74cff58e7e74a4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5 k8s.gcr.io/etcd:3.5.3-0],SizeBytes:102143581,},ContainerImage{Names:[mcr.microsoft.com/k8s/csi/azurefile-csi@sha256:d0e18e2b41040f7a0a68324bed4b1cdc94e0d5009ed816f9c00f7ad45f640c67 mcr.microsoft.com/k8s/csi/azurefile-csi:latest],SizeBytes:75743702,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:7e75c20c0fb0a334fa364546ece4c11a61a7595ce2e27de265cacb4e7ccc7f9f k8s.gcr.io/kube-proxy:v1.24.2],SizeBytes:39515830,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.25.0-alpha.1.59_1ceca7b139e141 k8s.gcr.io/kube-proxy:v1.25.0-alpha.1.59_1ceca7b139e141],SizeBytes:39501121,},ContainerImage{Names:[capzci.azurecr.io/kube-proxy@sha256:63ac248c3589981554b9dc7356aa74f16569bf5e75b7af5db281496aeea1d2b7 capzci.azurecr.io/kube-proxy:v1.25.0-alpha.1.65_74cff58e7e74a4],SizeBytes:39499263,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:433696d8a90870c405fc2d42020aff0966fb3f1c59bdd1f5077f41335b327c9a k8s.gcr.io/kube-apiserver:v1.24.2],SizeBytes:33795763,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.25.0-alpha.1.59_1ceca7b139e141 k8s.gcr.io/kube-apiserver:v1.25.0-alpha.1.59_1ceca7b139e141],SizeBytes:33779251,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:d255427f14c9236088c22cd94eb434d7c6a05f615636eac0b9681566cd142753 k8s.gcr.io/kube-controller-manager:v1.24.2],SizeBytes:31035052,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.25.0-alpha.1.59_1ceca7b139e141 k8s.gcr.io/kube-controller-manager:v1.25.0-alpha.1.59_1ceca7b139e141],SizeBytes:31010648,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.25.0-alpha.1.59_1ceca7b139e141 k8s.gcr.io/kube-scheduler:v1.25.0-alpha.1.59_1ceca7b139e141],SizeBytes:15533663,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:b5bc69ac1e173a58a2b3af11ba65057ff2b71de25d0f93ab947e16714a896a1f k8s.gcr.io/kube-scheduler:v1.24.2],SizeBytes:15488980,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a 
registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar@sha256:2fbd1e1a0538a06f2061afd45975df70c942654aa7f86e920720169ee439c2d6 mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.5.1],SizeBytes:9578961,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe@sha256:31547791294872570393470991481c2477a311031d3a03e0ae54eb164347dc34 mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0],SizeBytes:8689744,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 21 22:30:46.157: INFO: Logging kubelet events for node capz-qx7od9-control-plane-cs87r Jun 21 22:30:46.261: INFO: Logging pods the kubelet thinks is on node capz-qx7od9-control-plane-cs87r Jun 21 22:30:46.516: INFO: kube-controller-manager-capz-qx7od9-control-plane-cs87r started at <nil> (0+0 container statuses recorded) Jun 21 22:30:46.516: INFO: kube-scheduler-capz-qx7od9-control-plane-cs87r started at <nil> (0+0 container statuses recorded) Jun 21 22:30:46.516: INFO: coredns-8c797478b-4kxdz started at 2022-06-21 22:21:40 +0000 UTC (0+1 container statuses recorded) Jun 21 22:30:46.516: INFO: Container coredns ready: true, restart count 0 Jun 21 22:30:46.516: INFO: csi-azurefile-node-nx8wl started at 2022-06-21 22:23:37 +0000 UTC (0+3 container statuses recorded) Jun 21 22:30:46.516: INFO: Container azurefile ready: true, restart count 0 Jun 21 22:30:46.516: INFO: Container liveness-probe ready: true, restart count 0 Jun 21 22:30:46.516: INFO: Container node-driver-registrar ready: true, restart count 0 Jun 21 22:30:46.516: INFO: etcd-capz-qx7od9-control-plane-cs87r started at 2022-06-21 22:21:15 +0000 UTC (0+1 container statuses recorded) Jun 21 22:30:46.516: INFO: Container etcd ready: true, restart count 0 Jun 21 22:30:46.516: INFO: kube-apiserver-capz-qx7od9-control-plane-cs87r started at <nil> (0+0 container statuses recorded) Jun 21 22:30:46.516: INFO: kube-proxy-tvds9 started at 2022-06-21 22:21:15 +0000 UTC (0+1 container statuses recorded) Jun 21 22:30:46.516: INFO: Container kube-proxy ready: true, restart count 0 Jun 21 22:30:46.516: INFO: calico-node-l99h2 started at 2022-06-21 22:21:18 +0000 UTC (2+1 container statuses recorded) Jun 21 22:30:46.516: INFO: Init container upgrade-ipam ready: true, restart count 0 Jun 21 22:30:46.516: INFO: Init container install-cni ready: true, restart count 0 Jun 21 22:30:46.516: INFO: Container calico-node ready: true, restart count 0 Jun 21 22:30:46.516: INFO: coredns-8c797478b-xsr8x started at 2022-06-21 22:21:40 +0000 UTC (0+1 container statuses recorded) Jun 21 22:30:46.516: INFO: Container coredns ready: true, restart count 0 Jun 21 22:30:46.516: INFO: calico-kube-controllers-57cb778775-kx42t started at 2022-06-21 22:21:40 +0000 UTC (0+1 container statuses recorded) Jun 21 22:30:46.516: INFO: Container calico-kube-controllers ready: true, restart count 0 
Jun 21 22:30:46.895: INFO: Latency metrics for node capz-qx7od9-control-plane-cs87r Jun 21 22:30:46.895: INFO: Logging node info for node capz-qx7od9-mp-0000000 Jun 21 22:30:47.001: INFO: Node Info: &Node{ObjectMeta:{capz-qx7od9-mp-0000000 0dc81290-cf52-43c9-bab9-544ac5c260d6 1666 0 2022-06-21 22:22:55 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:uksouth failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-qx7od9-mp-0000000 kubernetes.io/os:linux node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:uksouth topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-qx7od9 cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/owner-kind:MachinePool cluster.x-k8s.io/owner-name:capz-qx7od9-mp-0 csi.volume.kubernetes.io/nodeid:{"file.csi.azure.com":"capz-qx7od9-mp-0000000"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.154.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-06-21 22:22:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-06-21 22:23:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2022-06-21 22:23:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {Go-http-client Update v1 2022-06-21 22:23:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {manager Update v1 2022-06-21 22:23:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kubelet Update v1 2022-06-21 22:26:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-qx7od9/providers/Microsoft.Compute/virtualMachineScaleSets/capz-qx7od9-mp-0/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036686336 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933017657 0} {<nil>} 27933017657 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-21 22:23:30 +0000 UTC,LastTransitionTime:2022-06-21 22:23:30 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-21 22:26:30 +0000 UTC,LastTransitionTime:2022-06-21 22:22:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-21 22:26:30 +0000 UTC,LastTransitionTime:2022-06-21 22:22:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-21 22:26:30 +0000 UTC,LastTransitionTime:2022-06-21 22:22:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-21 22:26:30 +0000 UTC,LastTransitionTime:2022-06-21 22:23:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-qx7od9-mp-0000000,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a5fb2058a73f447fab6c4ef5ccfaf6ce,SystemUUID:108e5ca6-58d2-e141-a4a0-81e9dbfc4c0b,BootID:ca80961e-8ed5-4598-afb1-d7190d92ff13,KernelVersion:5.4.0-1085-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.25.0-alpha.1.65+74cff58e7e74a4,KubeProxyVersion:v1.25.0-alpha.1.65+74cff58e7e74a4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5 k8s.gcr.io/etcd:3.5.3-0],SizeBytes:102143581,},ContainerImage{Names:[mcr.microsoft.com/k8s/csi/azurefile-csi@sha256:d0e18e2b41040f7a0a68324bed4b1cdc94e0d5009ed816f9c00f7ad45f640c67 mcr.microsoft.com/k8s/csi/azurefile-csi:latest],SizeBytes:75743702,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner@sha256:429c8476e3acac27b06ff8054fd983c8c5cfd928b84346239517f29efda41874 mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.1.1],SizeBytes:58142722,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-resizer@sha256:544e74bd67c649fd49500e195ff4a4ee675cfd26768574262dc6fa0250373d59 mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0],SizeBytes:57519578,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-attacher@sha256:7e5af2ed16e053822e58f6576423c0bb77e59050c3698986f319d257b4551023 mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0],SizeBytes:56936934,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:7e75c20c0fb0a334fa364546ece4c11a61a7595ce2e27de265cacb4e7ccc7f9f k8s.gcr.io/kube-proxy:v1.24.2],SizeBytes:39515830,},ContainerImage{Names:[capzci.azurecr.io/kube-proxy@sha256:63ac248c3589981554b9dc7356aa74f16569bf5e75b7af5db281496aeea1d2b7 capzci.azurecr.io/kube-proxy:v1.25.0-alpha.1.65_74cff58e7e74a4],SizeBytes:39499263,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:433696d8a90870c405fc2d42020aff0966fb3f1c59bdd1f5077f41335b327c9a k8s.gcr.io/kube-apiserver:v1.24.2],SizeBytes:33795763,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:d255427f14c9236088c22cd94eb434d7c6a05f615636eac0b9681566cd142753 k8s.gcr.io/kube-controller-manager:v1.24.2],SizeBytes:31035052,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter@sha256:a889e925e15f9423f7842f1b769f64cbcf6a20b6956122836fc835cf22d9073f mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1],SizeBytes:22192414,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/snapshot-controller@sha256:8c3fc3c2667004ad6bbdf723bb64c5da66a5cb8b11d4ee59b67179b686223b13 mcr.microsoft.com/oss/kubernetes-csi/snapshot-controller:v5.0.1],SizeBytes:21074719,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:b5bc69ac1e173a58a2b3af11ba65057ff2b71de25d0f93ab947e16714a896a1f k8s.gcr.io/kube-scheduler:v1.24.2],SizeBytes:15488980,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e 
k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar@sha256:2fbd1e1a0538a06f2061afd45975df70c942654aa7f86e920720169ee439c2d6 mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.5.1],SizeBytes:9578961,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe@sha256:31547791294872570393470991481c2477a311031d3a03e0ae54eb164347dc34 mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0],SizeBytes:8689744,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 21 22:30:47.001: INFO: Logging kubelet events for node capz-qx7od9-mp-0000000 Jun 21 22:30:47.107: INFO: Logging pods the kubelet thinks is on node capz-qx7od9-mp-0000000 Jun 21 22:30:47.255: INFO: calico-node-gmknk started at 2022-06-21 22:22:56 +0000 UTC (2+1 container statuses recorded) Jun 21 22:30:47.255: INFO: Init container upgrade-ipam ready: true, restart count 0 Jun 21 22:30:47.255: INFO: Init container install-cni ready: true, restart count 0 Jun 21 22:30:47.255: INFO: Container calico-node ready: true, restart count 0 Jun 21 22:30:47.255: INFO: csi-azurefile-controller-8565959cf4-nkkpl started at 2022-06-21 22:23:35 +0000 UTC (0+6 container statuses recorded) Jun 21 22:30:47.255: INFO: Container azurefile ready: true, restart count 0 Jun 21 22:30:47.255: INFO: Container csi-attacher ready: true, restart count 0 Jun 21 22:30:47.255: INFO: Container csi-provisioner ready: true, restart count 0 Jun 21 22:30:47.255: INFO: Container csi-resizer ready: true, restart count 0 Jun 21 22:30:47.255: INFO: Container csi-snapshotter ready: true, restart count 0 Jun 21 22:30:47.255: INFO: Container liveness-probe ready: true, restart count 0 Jun 21 22:30:47.255: INFO: csi-azurefile-node-6d9b8 started at 2022-06-21 22:23:36 +0000 UTC (0+3 container statuses recorded) Jun 21 22:30:47.255: INFO: Container azurefile ready: true, restart count 0 Jun 21 22:30:47.255: INFO: Container liveness-probe ready: true, restart count 0 Jun 21 22:30:47.255: INFO: Container node-driver-registrar ready: true, restart count 0 Jun 21 22:30:47.255: INFO: csi-snapshot-controller-789545b454-7dkkw started at 2022-06-21 22:23:43 +0000 UTC (0+1 container statuses recorded) Jun 21 22:30:47.255: INFO: Container csi-snapshot-controller ready: true, restart count 0 Jun 21 22:30:47.255: INFO: kube-proxy-v4lnh started at 2022-06-21 22:22:56 +0000 UTC (0+1 container statuses recorded) Jun 21 22:30:47.255: INFO: Container kube-proxy ready: true, restart count 0 Jun 21 22:30:47.650: INFO: Latency metrics for node capz-qx7od9-mp-0000000 Jun 21 22:30:47.651: INFO: Logging node info for node capz-qx7od9-mp-0000001 Jun 21 22:30:47.760: INFO: Node Info: &Node{ObjectMeta:{capz-qx7od9-mp-0000001 650da3e8-985e-4782-bd13-b69dc7889682 1672 0 2022-06-21 22:22:56 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:uksouth 
failure-domain.beta.kubernetes.io/zone:1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-qx7od9-mp-0000001 kubernetes.io/os:linux node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:uksouth topology.kubernetes.io/zone:1] map[cluster.x-k8s.io/cluster-name:capz-qx7od9 cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/owner-kind:MachinePool cluster.x-k8s.io/owner-name:capz-qx7od9-mp-0 csi.volume.kubernetes.io/nodeid:{"file.csi.azure.com":"capz-qx7od9-mp-0000001"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.25.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-06-21 22:22:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-06-21 22:23:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2022-06-21 22:23:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {Go-http-client Update v1 2022-06-21 22:23:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {manager Update v1 2022-06-21 22:23:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kubelet Update v1 2022-06-21 22:26:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-qx7od9/providers/Microsoft.Compute/virtualMachineScaleSets/capz-qx7od9-mp-0/virtualMachines/1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036686336 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933017657 0} 
{<nil>} 27933017657 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-21 22:23:31 +0000 UTC,LastTransitionTime:2022-06-21 22:23:31 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-21 22:26:31 +0000 UTC,LastTransitionTime:2022-06-21 22:22:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-21 22:26:31 +0000 UTC,LastTransitionTime:2022-06-21 22:22:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-21 22:26:31 +0000 UTC,LastTransitionTime:2022-06-21 22:22:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-21 22:26:31 +0000 UTC,LastTransitionTime:2022-06-21 22:23:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-qx7od9-mp-0000001,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:12093b114b3b4372aee297c861dd5210,SystemUUID:ef5045a9-f004-4748-b948-5f3e56acfd89,BootID:75d6bba1-4da2-4776-af6a-7661587672f6,KernelVersion:5.4.0-1085-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.25.0-alpha.1.65+74cff58e7e74a4,KubeProxyVersion:v1.25.0-alpha.1.65+74cff58e7e74a4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5 k8s.gcr.io/etcd:3.5.3-0],SizeBytes:102143581,},ContainerImage{Names:[mcr.microsoft.com/k8s/csi/azurefile-csi@sha256:d0e18e2b41040f7a0a68324bed4b1cdc94e0d5009ed816f9c00f7ad45f640c67 mcr.microsoft.com/k8s/csi/azurefile-csi:latest],SizeBytes:75743702,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner@sha256:429c8476e3acac27b06ff8054fd983c8c5cfd928b84346239517f29efda41874 mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.1.1],SizeBytes:58142722,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-resizer@sha256:544e74bd67c649fd49500e195ff4a4ee675cfd26768574262dc6fa0250373d59 mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0],SizeBytes:57519578,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-attacher@sha256:7e5af2ed16e053822e58f6576423c0bb77e59050c3698986f319d257b4551023 mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0],SizeBytes:56936934,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:7e75c20c0fb0a334fa364546ece4c11a61a7595ce2e27de265cacb4e7ccc7f9f 
k8s.gcr.io/kube-proxy:v1.24.2],SizeBytes:39515830,},ContainerImage{Names:[capzci.azurecr.io/kube-proxy@sha256:63ac248c3589981554b9dc7356aa74f16569bf5e75b7af5db281496aeea1d2b7 capzci.azurecr.io/kube-proxy:v1.25.0-alpha.1.65_74cff58e7e74a4],SizeBytes:39499263,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:433696d8a90870c405fc2d42020aff0966fb3f1c59bdd1f5077f41335b327c9a k8s.gcr.io/kube-apiserver:v1.24.2],SizeBytes:33795763,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:d255427f14c9236088c22cd94eb434d7c6a05f615636eac0b9681566cd142753 k8s.gcr.io/kube-controller-manager:v1.24.2],SizeBytes:31035052,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter@sha256:a889e925e15f9423f7842f1b769f64cbcf6a20b6956122836fc835cf22d9073f mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1],SizeBytes:22192414,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/snapshot-controller@sha256:8c3fc3c2667004ad6bbdf723bb64c5da66a5cb8b11d4ee59b67179b686223b13 mcr.microsoft.com/oss/kubernetes-csi/snapshot-controller:v5.0.1],SizeBytes:21074719,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:b5bc69ac1e173a58a2b3af11ba65057ff2b71de25d0f93ab947e16714a896a1f k8s.gcr.io/kube-scheduler:v1.24.2],SizeBytes:15488980,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar@sha256:2fbd1e1a0538a06f2061afd45975df70c942654aa7f86e920720169ee439c2d6 mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.5.1],SizeBytes:9578961,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe@sha256:31547791294872570393470991481c2477a311031d3a03e0ae54eb164347dc34 mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0],SizeBytes:8689744,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 21 22:30:47.760: INFO: Logging kubelet events for node capz-qx7od9-mp-0000001 Jun 21 22:30:47.865: INFO: Logging pods the kubelet thinks is on node capz-qx7od9-mp-0000001 Jun 21 22:30:48.006: INFO: csi-azurefile-node-7svxw started at 2022-06-21 22:23:37 +0000 UTC (0+3 container statuses recorded) Jun 21 22:30:48.006: INFO: Container azurefile ready: true, restart count 0 Jun 21 22:30:48.006: INFO: Container liveness-probe ready: true, restart count 0 Jun 21 22:30:48.006: INFO: Container node-driver-registrar ready: true, restart count 0 Jun 21 22:30:48.006: INFO: csi-snapshot-controller-789545b454-x8p9v started at 2022-06-21 22:23:43 +0000 UTC (0+1 container statuses recorded) Jun 21 22:30:48.006: INFO: Container csi-snapshot-controller ready: true, restart count 0 Jun 21 22:30:48.006: INFO: kube-proxy-7bmb2 started at 2022-06-21 22:22:57 +0000 UTC (0+1 container statuses recorded) Jun 21 22:30:48.006: INFO: Container kube-proxy ready: true, restart count 0 Jun 21 22:30:48.006: INFO: calico-node-qz2dq started at 2022-06-21 22:22:57 +0000 UTC (2+1 
container statuses recorded)
Jun 21 22:30:48.006: INFO: Init container upgrade-ipam ready: true, restart count 0
Jun 21 22:30:48.006: INFO: Init container install-cni ready: true, restart count 0
Jun 21 22:30:48.006: INFO: Container calico-node ready: true, restart count 0
Jun 21 22:30:48.006: INFO: csi-azurefile-controller-8565959cf4-6fjrd started at 2022-06-21 22:23:35 +0000 UTC (0+6 container statuses recorded)
Jun 21 22:30:48.006: INFO: Container azurefile ready: true, restart count 0
Jun 21 22:30:48.006: INFO: Container csi-attacher ready: true, restart count 0
Jun 21 22:30:48.006: INFO: Container csi-provisioner ready: true, restart count 0
Jun 21 22:30:48.006: INFO: Container csi-resizer ready: true, restart count 0
Jun 21 22:30:48.006: INFO: Container csi-snapshotter ready: true, restart count 0
Jun 21 22:30:48.006: INFO: Container liveness-probe ready: true, restart count 0
Jun 21 22:30:48.410: INFO: Latency metrics for node capz-qx7od9-mp-0000001
Jun 21 22:30:48.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azurefile-2546" for this suite.
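The VolumeResizeFailed event above points to the root cause: the driver rejects ControllerExpandVolume for vhd disk volumes, so the PV stays at 10Gi while the PVC asks for 11Gi. Below is a rough, hypothetical Go sketch of how a CSI controller plugin signals that refusal with a gRPC Unimplemented status; it is not copied from azurefile-csi-driver, and the isVHDDisk flag stands in for whatever detection the real driver performs on the volume ID.

// Hypothetical CSI ControllerExpandVolume handler, for illustration only.
package sketch

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// controllerExpandVolume refuses to expand vhd-disk-backed volumes, producing
// an error shaped like the one in the VolumeResizeFailed event above.
func controllerExpandVolume(ctx context.Context, req *csi.ControllerExpandVolumeRequest, isVHDDisk bool) (*csi.ControllerExpandVolumeResponse, error) {
	if isVHDDisk {
		return nil, status.Errorf(codes.Unimplemented,
			"vhd disk volume(%s) is not supported on ControllerExpandVolume", req.GetVolumeId())
	}
	// Otherwise the file share quota would be expanded and the new size reported back.
	return &csi.ControllerExpandVolumeResponse{
		CapacityBytes:         req.GetCapacityRange().GetRequiredBytes(),
		NodeExpansionRequired: false,
	}, nil
}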
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a deployment object, write and read to it, delete the pod and write and read to it again [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a volume on demand and mount it as readOnly in a pod [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a volume on demand with mount options [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create multiple PV objects, bind to PVCs and attach all to different pods on the same node [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should delete PV with reclaimPolicy "Delete" [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning [env] should retain PV with reclaimPolicy "Retain" [file.csi.azure.com] [disk]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a NFS volume on demand on a storage account with private endpoint [file.csi.azure.com] [nfs]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a NFS volume on demand with mount options [file.csi.azure.com] [nfs]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a deployment object, write and read to it, delete the pod and write and read to it again [file.csi.azure.com] [disk]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a pod with multiple NFS volumes [file.csi.azure.com]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a pod with multiple volumes [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a pod with volume mount subpath [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a pod, write and read to it, take a volume snapshot, and validate whether it is ready to use [file.csi.azure.com]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a statefulset object, write and read to it, delete the pod and write and read to it again [file.csi.azure.com]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a storage account with tags [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a vhd disk volume on demand [kubernetes.io/azure-file] [file.csi.azure.com][disk]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a vhd disk volume on demand and mount it as readOnly in a pod [file.csi.azure.com][disk]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a volume after driver restart [kubernetes.io/azure-file] [file.csi.azure.com]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a volume on demand with mount options (Bring Your Own Key) [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create a volume on demand with useDataPlaneAPI [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create an CSI inline volume [file.csi.azure.com]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create an inline volume by in-tree driver [kubernetes.io/azure-file]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should create multiple PV objects, bind to PVCs and attach all to different pods on the same node [file.csi.azure.com][disk]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should delete PV with reclaimPolicy "Delete" [file.csi.azure.com] [disk]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should mount on-prem smb server [file.csi.azure.com]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should receive FailedMount event with invalid mount options [file.csi.azure.com] [disk]
AzureFile CSI Driver End-to-End Tests Dynamic Provisioning should retain PV with reclaimPolicy "Retain" [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Pre-Provisioned should use a pre-provisioned volume and mount it as readOnly in a pod [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Pre-Provisioned should use a pre-provisioned volume and mount it by multiple pods [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Pre-Provisioned should use a pre-provisioned volume and retain PV with reclaimPolicy "Retain" [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Pre-Provisioned should use existing credentials in k8s cluster [file.csi.azure.com] [Windows]
AzureFile CSI Driver End-to-End Tests Pre-Provisioned should use provided credentials [file.csi.azure.com] [Windows]
... skipping 81 lines ...
/home/prow/go/src/k8s.io/kubernetes /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure
% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 154 100 154 0 0 6160 0 --:--:-- --:--:-- --:--:-- 6160 100 33 100 33 0 0 500 0 --:--:-- --:--:-- --:--:-- 500
Error response from daemon: manifest for capzci.azurecr.io/kube-apiserver:v1.25.0-alpha.1.65_74cff58e7e74a4 not found: manifest unknown: manifest tagged by "v1.25.0-alpha.1.65_74cff58e7e74a4" is not found
Building Kubernetes
make: Entering directory '/home/prow/go/src/k8s.io/kubernetes'
+++ [0621 21:50:48] Verifying Prerequisites....
+++ [0621 21:50:48] Building Docker image kube-build:build-0352e5c79d-5-v1.25.0-go1.18.3-bullseye.0
+++ [0621 21:53:39] Creating data container kube-build-data-0352e5c79d-5-v1.25.0-go1.18.3-bullseye.0
+++ [0621 21:53:52] Syncing sources to container
... skipping 745 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
# Deploy CAPI
curl --retry 3 -sSL https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.1.4/cluster-api-components.yaml | /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/envsubst-v2.0.0-20210730161058-179042472c46 | kubectl apply -f -
namespace/capi-system created
customresourcedefinition.apiextensions.k8s.io/clusterclasses.cluster.x-k8s.io created
... skipping 201 lines ...
Pre-Provisioned
  should use a pre-provisioned volume and mount it as readOnly in a pod [file.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:77
STEP: Creating a kubernetes client
Jun 21 22:25:24.056: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azurefile
Jun 21 22:25:24.733: INFO: Error listing PodSecurityPolicies; assuming PodSecurityPolicy is disabled: the server could not find the requested resource
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
2022/06/21 22:25:25 Check driver pods if restarts ...
check the driver pods if restarts ...
======================================================================================
2022/06/21 22:25:25 Check successfully
... skipping 178 lines ...
Jun 21 22:25:53.256: INFO: PersistentVolumeClaim pvc-wbbgh found but phase is Pending instead of Bound.
Jun 21 22:25:55.359: INFO: PersistentVolumeClaim pvc-wbbgh found and phase=Bound (21.132204516s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
Jun 21 22:25:55.666: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-cxn89" in namespace "azurefile-5194" to be "Succeeded or Failed"
Jun 21 22:25:55.767: INFO: Pod "azurefile-volume-tester-cxn89": Phase="Pending", Reason="", readiness=false. Elapsed: 101.784397ms
Jun 21 22:25:57.876: INFO: Pod "azurefile-volume-tester-cxn89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210203747s
Jun 21 22:25:59.984: INFO: Pod "azurefile-volume-tester-cxn89": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318183562s
Jun 21 22:26:02.093: INFO: Pod "azurefile-volume-tester-cxn89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.427301142s
STEP: Saw pod success
Jun 21 22:26:02.093: INFO: Pod "azurefile-volume-tester-cxn89" satisfied condition "Succeeded or Failed"
Jun 21 22:26:02.093: INFO: deleting Pod "azurefile-5194"/"azurefile-volume-tester-cxn89"
Jun 21 22:26:02.209: INFO: Pod azurefile-volume-tester-cxn89 has the following logs: hello world
STEP: Deleting pod azurefile-volume-tester-cxn89 in namespace azurefile-5194
Jun 21 22:26:02.409: INFO: deleting PVC "azurefile-5194"/"pvc-wbbgh"
Jun 21 22:26:02.409: INFO: Deleting PersistentVolumeClaim "pvc-wbbgh"
... skipping 155 lines ...
Jun 21 22:27:56.105: INFO: PersistentVolumeClaim pvc-s5jgt found but phase is Pending instead of Bound.
Jun 21 22:27:58.209: INFO: PersistentVolumeClaim pvc-s5jgt found and phase=Bound (21.137335586s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with an error
Jun 21 22:27:58.518: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-b9xt5" in namespace "azurefile-156" to be "Error status code"
Jun 21 22:27:58.620: INFO: Pod "azurefile-volume-tester-b9xt5": Phase="Pending", Reason="", readiness=false. Elapsed: 102.58605ms
Jun 21 22:28:00.730: INFO: Pod "azurefile-volume-tester-b9xt5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212143177s
Jun 21 22:28:02.839: INFO: Pod "azurefile-volume-tester-b9xt5": Phase="Failed", Reason="", readiness=false. Elapsed: 4.321195456s
STEP: Saw pod failure
Jun 21 22:28:02.839: INFO: Pod "azurefile-volume-tester-b9xt5" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Jun 21 22:28:02.951: INFO: deleting Pod "azurefile-156"/"azurefile-volume-tester-b9xt5"
Jun 21 22:28:03.056: INFO: Pod azurefile-volume-tester-b9xt5 has the following logs: touch: /mnt/test-1/data: Read-only file system
STEP: Deleting pod azurefile-volume-tester-b9xt5 in namespace azurefile-156
Jun 21 22:28:03.173: INFO: deleting PVC "azurefile-156"/"pvc-s5jgt"
... skipping 180 lines ...
Jun 21 22:30:02.781: INFO: PersistentVolumeClaim pvc-qz9kx found but phase is Pending instead of Bound.
Jun 21 22:30:04.884: INFO: PersistentVolumeClaim pvc-qz9kx found and phase=Bound (2.204426994s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
Jun 21 22:30:05.191: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-bqxpl" in namespace "azurefile-2546" to be "Succeeded or Failed"
Jun 21 22:30:05.292: INFO: Pod "azurefile-volume-tester-bqxpl": Phase="Pending", Reason="", readiness=false. Elapsed: 101.899868ms
Jun 21 22:30:07.400: INFO: Pod "azurefile-volume-tester-bqxpl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209824544s
Jun 21 22:30:09.510: INFO: Pod "azurefile-volume-tester-bqxpl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.319033579s
STEP: Saw pod success
Jun 21 22:30:09.510: INFO: Pod "azurefile-volume-tester-bqxpl" satisfied condition "Succeeded or Failed"
STEP: resizing the pvc
STEP: sleep 30s waiting for resize complete
STEP: checking the resizing result
STEP: checking the resizing PV result
Jun 21 22:30:39.928: FAIL: newPVCSize(11Gi) is not equal to newPVSize(10GiGi)
Full Stack Trace
sigs.k8s.io/azurefile-csi-driver/test/e2e.glob..func1.10()
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:380 +0x25c
sigs.k8s.io/azurefile-csi-driver/test/e2e.TestE2E(0x0?)
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/suite_test.go:239 +0x11f
... skipping 22 lines ...
Jun 21 22:30:45.793: INFO: At 2022-06-21 22:30:05 +0000 UTC - event for azurefile-volume-tester-bqxpl: {default-scheduler } Scheduled: Successfully assigned azurefile-2546/azurefile-volume-tester-bqxpl to capz-qx7od9-mp-0000001
Jun 21 22:30:45.793: INFO: At 2022-06-21 22:30:06 +0000 UTC - event for azurefile-volume-tester-bqxpl: {kubelet capz-qx7od9-mp-0000001} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine
Jun 21 22:30:45.793: INFO: At 2022-06-21 22:30:06 +0000 UTC - event for azurefile-volume-tester-bqxpl: {kubelet capz-qx7od9-mp-0000001} Created: Created container volume-tester
Jun 21 22:30:45.793: INFO: At 2022-06-21 22:30:06 +0000 UTC - event for azurefile-volume-tester-bqxpl: {kubelet capz-qx7od9-mp-0000001} Started: Started container volume-tester
Jun 21 22:30:45.793: INFO: At 2022-06-21 22:30:09 +0000 UTC - event for pvc-qz9kx: {volume_expand } ExternalExpanding: CSI migration enabled for kubernetes.io/azure-file; waiting for external resizer to expand the pvc
Jun 21 22:30:45.793: INFO: At 2022-06-21 22:30:09 +0000 UTC - event for pvc-qz9kx: {external-resizer file.csi.azure.com } Resizing: External resizer is resizing volume pvc-164df6eb-b1d2-4a2a-9537-298e19661ce1
Jun 21 22:30:45.793: INFO: At 2022-06-21 22:30:09 +0000 UTC - event for pvc-qz9kx: {external-resizer file.csi.azure.com } VolumeResizeFailed: resize volume "pvc-164df6eb-b1d2-4a2a-9537-298e19661ce1" by resizer "file.csi.azure.com" failed: rpc error: code = Unimplemented desc = vhd disk volume(capz-qx7od9#f0a7fe6cb036841ae869516#pvc-164df6eb-b1d2-4a2a-9537-298e19661ce1#pvc-164df6eb-b1d2-4a2a-9537-298e19661ce1#azurefile-2546) is not supported on ControllerExpandVolume
Jun 21 22:30:45.895: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 21 22:30:45.895: INFO:
Jun 21 22:30:46.043: INFO: Logging node info for node capz-qx7od9-control-plane-cs87r
Jun 21 22:30:46.156: INFO: Node Info: &Node{ObjectMeta:{capz-qx7od9-control-plane-cs87r b91cfba9-f252-4a0f-8bd7-1e6713bdafea 2248
0 2022-06-21 22:21:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:uksouth failure-domain.beta.kubernetes.io/zone:uksouth-2 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-qx7od9-control-plane-cs87r kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:uksouth topology.kubernetes.io/zone:uksouth-2] map[cluster.x-k8s.io/cluster-name:capz-qx7od9 cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/machine:capz-qx7od9-control-plane-cmgv4 cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-qx7od9-control-plane csi.volume.kubernetes.io/nodeid:{"file.csi.azure.com":"capz-qx7od9-control-plane-cs87r"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.111.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-06-21 22:21:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-06-21 22:21:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-06-21 22:21:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-06-21 22:21:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-06-21 22:21:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-06-21 22:29:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-qx7od9/providers/Microsoft.Compute/virtualMachines/capz-qx7od9-control-plane-cs87r,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133018140672 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119716326407 0} {<nil>} 119716326407 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-21 22:21:43 +0000 UTC,LastTransitionTime:2022-06-21 22:21:43 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-21 22:29:19 +0000 UTC,LastTransitionTime:2022-06-21 22:20:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-21 22:29:19 +0000 UTC,LastTransitionTime:2022-06-21 22:20:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-21 22:29:19 +0000 UTC,LastTransitionTime:2022-06-21 22:20:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-21 22:29:19 +0000 UTC,LastTransitionTime:2022-06-21 22:21:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-qx7od9-control-plane-cs87r,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:db783d92dfe146a4a12f52ae78a4d762,SystemUUID:2520b718-88f8-d840-8ca3-58ee64b06d04,BootID:c48a5460-788e-4aca-b292-f32c2ed361ac,KernelVersion:5.4.0-1085-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.25.0-alpha.1.65+74cff58e7e74a4,KubeProxyVersion:v1.25.0-alpha.1.65+74cff58e7e74a4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5 k8s.gcr.io/etcd:3.5.3-0],SizeBytes:102143581,},ContainerImage{Names:[mcr.microsoft.com/k8s/csi/azurefile-csi@sha256:d0e18e2b41040f7a0a68324bed4b1cdc94e0d5009ed816f9c00f7ad45f640c67 mcr.microsoft.com/k8s/csi/azurefile-csi:latest],SizeBytes:75743702,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:7e75c20c0fb0a334fa364546ece4c11a61a7595ce2e27de265cacb4e7ccc7f9f k8s.gcr.io/kube-proxy:v1.24.2],SizeBytes:39515830,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.25.0-alpha.1.59_1ceca7b139e141 k8s.gcr.io/kube-proxy:v1.25.0-alpha.1.59_1ceca7b139e141],SizeBytes:39501121,},ContainerImage{Names:[capzci.azurecr.io/kube-proxy@sha256:63ac248c3589981554b9dc7356aa74f16569bf5e75b7af5db281496aeea1d2b7 capzci.azurecr.io/kube-proxy:v1.25.0-alpha.1.65_74cff58e7e74a4],SizeBytes:39499263,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:433696d8a90870c405fc2d42020aff0966fb3f1c59bdd1f5077f41335b327c9a k8s.gcr.io/kube-apiserver:v1.24.2],SizeBytes:33795763,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.25.0-alpha.1.59_1ceca7b139e141 k8s.gcr.io/kube-apiserver:v1.25.0-alpha.1.59_1ceca7b139e141],SizeBytes:33779251,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:d255427f14c9236088c22cd94eb434d7c6a05f615636eac0b9681566cd142753 k8s.gcr.io/kube-controller-manager:v1.24.2],SizeBytes:31035052,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.25.0-alpha.1.59_1ceca7b139e141 k8s.gcr.io/kube-controller-manager:v1.25.0-alpha.1.59_1ceca7b139e141],SizeBytes:31010648,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.25.0-alpha.1.59_1ceca7b139e141 k8s.gcr.io/kube-scheduler:v1.25.0-alpha.1.59_1ceca7b139e141],SizeBytes:15533663,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:b5bc69ac1e173a58a2b3af11ba65057ff2b71de25d0f93ab947e16714a896a1f k8s.gcr.io/kube-scheduler:v1.24.2],SizeBytes:15488980,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a 
registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar@sha256:2fbd1e1a0538a06f2061afd45975df70c942654aa7f86e920720169ee439c2d6 mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.5.1],SizeBytes:9578961,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe@sha256:31547791294872570393470991481c2477a311031d3a03e0ae54eb164347dc34 mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0],SizeBytes:8689744,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 21 22:30:46.157: INFO:
... skipping 643 lines ...
JUnit report was created: /logs/artifacts/junit_01.xml

Summarizing 1 Failure:

[Fail] Dynamic Provisioning [It] should create a volume on demand and resize it [kubernetes.io/azure-file] [file.csi.azure.com] [Windows]
/home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:380

Ran 6 of 34 Specs in 354.521 seconds
FAIL! -- 5 Passed | 1 Failed | 0 Pending | 28 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes. A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021. Please give the RC a try and send us feedback!
- To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 5 lines ...
If this change will be impactful to you please leave a comment on https://github.com/onsi/ginkgo/issues/711
Learn more at: https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#removed-custom-reporters
To silence deprecations that can be silenced set the following environment variable: ACK_GINKGO_DEPRECATIONS=1.16.5
--- FAIL: TestE2E (354.53s)
FAIL
FAIL	sigs.k8s.io/azurefile-csi-driver/test/e2e	354.590s
FAIL
make: *** [Makefile:85: e2e-test] Error 1
NAME                              STATUS   ROLES           AGE     VERSION                             INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
capz-qx7od9-control-plane-cs87r   Ready    control-plane   10m     v1.25.0-alpha.1.65+74cff58e7e74a4   10.0.0.4      <none>        Ubuntu 18.04.6 LTS   5.4.0-1085-azure   containerd://1.6.2
capz-qx7od9-mp-0000000            Ready    <none>          8m24s   v1.25.0-alpha.1.65+74cff58e7e74a4   10.1.0.4      <none>        Ubuntu 18.04.6 LTS   5.4.0-1085-azure   containerd://1.6.2
capz-qx7od9-mp-0000001            Ready    <none>          8m23s   v1.25.0-alpha.1.65+74cff58e7e74a4   10.1.0.5      <none>        Ubuntu 18.04.6 LTS   5.4.0-1085-azure   containerd://1.6.2
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE   IP                NODE                              NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-57cb778775-kx42t   1/1     Running   0          10m   192.168.111.130   capz-qx7od9-control-plane-cs87r   <none>           <none>
... skipping 43 lines ...
STEP: Collecting events for Pod kube-system/calico-node-qz2dq
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-6d9b8, container node-driver-registrar
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-8565959cf4-6fjrd, container csi-attacher
STEP: Collecting events for Pod kube-system/csi-azurefile-node-7svxw
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-nx8wl, container liveness-probe
STEP: Collecting events for Pod kube-system/etcd-capz-qx7od9-control-plane-cs87r
STEP: failed to find events of Pod "etcd-capz-qx7od9-control-plane-cs87r"
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-qx7od9-control-plane-cs87r, container kube-apiserver
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-nx8wl, container node-driver-registrar
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-8565959cf4-nkkpl, container csi-provisioner
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-8565959cf4-6fjrd, container csi-snapshotter
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-7svxw, container liveness-probe
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-8565959cf4-6fjrd, container liveness-probe
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-qx7od9-control-plane-cs87r
STEP: failed to find events of Pod "kube-apiserver-capz-qx7od9-control-plane-cs87r"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-qx7od9-control-plane-cs87r, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-8565959cf4-nkkpl, container csi-attacher
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-8565959cf4-6fjrd, container azurefile
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-8565959cf4-6fjrd, container csi-resizer
STEP: Collecting events for Pod kube-system/csi-azurefile-node-nx8wl
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-57cb778775-kx42t, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/csi-snapshot-controller-789545b454-7dkkw, container csi-snapshot-controller
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-6d9b8, container azurefile
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-qx7od9-control-plane-cs87r
STEP: failed to find events of Pod "kube-controller-manager-capz-qx7od9-control-plane-cs87r"
STEP: Creating log watcher for controller kube-system/kube-proxy-7bmb2, container kube-proxy
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-8565959cf4-nkkpl, container csi-snapshotter
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-7svxw, container azurefile
STEP: Creating log watcher for controller kube-system/etcd-capz-qx7od9-control-plane-cs87r, container etcd
STEP: Collecting events for Pod kube-system/csi-snapshot-controller-789545b454-7dkkw
STEP: Creating log watcher for controller kube-system/calico-node-gmknk, container calico-node
... skipping 19 lines ...
STEP: Creating log watcher for controller kube-system/csi-snapshot-controller-789545b454-x8p9v, container csi-snapshot-controller
STEP: Collecting events for Pod kube-system/calico-node-l99h2
STEP: Collecting events for Pod kube-system/csi-snapshot-controller-789545b454-x8p9v
STEP: Collecting events for Pod kube-system/kube-proxy-tvds9
STEP: Creating log watcher for controller kube-system/kube-proxy-v4lnh, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-v4lnh
STEP: failed to find events of Pod "kube-scheduler-capz-qx7od9-control-plane-cs87r"
STEP: Error starting logs stream for pod kube-system/kube-controller-manager-capz-qx7od9-control-plane-cs87r, container kube-controller-manager: container "kube-controller-manager" in pod "kube-controller-manager-capz-qx7od9-control-plane-cs87r" is not available
STEP: Error starting logs stream for pod kube-system/kube-apiserver-capz-qx7od9-control-plane-cs87r, container kube-apiserver: container "kube-apiserver" in pod "kube-apiserver-capz-qx7od9-control-plane-cs87r" is not available
STEP: Error starting logs stream for pod kube-system/kube-scheduler-capz-qx7od9-control-plane-cs87r, container kube-scheduler: container "kube-scheduler" in pod "kube-scheduler-capz-qx7od9-control-plane-cs87r" is not available
STEP: Fetching activity logs took 1.149346749s
================ REDACTING LOGS ================
All sensitive variables are redacted
make: Entering directory '/home/prow/go/src/k8s.io/kubernetes'
+++ [0621 22:33:50] Verifying Prerequisites....
+++ [0621 22:33:54] Removing _output directory
... skipping 12 lines ...
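Editor note on the failure above: two things stand out in the log. First, the expansion genuinely did not happen: the external-resizer event shows ControllerExpandVolume returning code = Unimplemented because the volume is vhd (disk) backed, so the PV stayed at 10Gi while the PVC was patched to 11Gi. Second, the reported value "10GiGi" looks like a unit suffix being appended to a size string that already carries one. The Go snippet below is a minimal, hypothetical sketch (not the actual assertion at dynamic_provisioning_test.go:380) of how such a doubled suffix can appear when formatting a resource.Quantity; it assumes only the sizes visible in the log and requires the k8s.io/apimachinery module.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Hypothetical values mirroring the log: the PVC was expanded to 11Gi,
	// but the PV stayed at 10Gi because ControllerExpandVolume returned Unimplemented.
	newPVCSize := resource.MustParse("11Gi")
	newPVSize := resource.MustParse("10Gi")

	// Quantity.String() already renders the "Gi" suffix, so appending the unit
	// again in the message produces the doubled "10GiGi" seen in the failure.
	if newPVCSize.Cmp(newPVSize) != 0 {
		fmt.Printf("newPVCSize(%v) is not equal to newPVSize(%v)\n",
			newPVCSize.String(), newPVSize.String()+"Gi")
	}
}

Whether or not the message formatting is cleaned up, the size mismatch itself is expected here: the driver rejects expansion of vhd-backed volumes, so the PV capacity never changes after the PVC is resized.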