Result: FAILURE
Tests: 1 failed / 2 succeeded
Started: 2023-03-09 16:18
Elapsed: 1h17m
Revision: release-1.7

Test Failures


capz-e2e [It] Conformance Tests conformance-tests 1h5m

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sConformance\sTests\sconformance\-tests$'
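
The line above is the published repro command: hack/e2e.go is the standard test wrapper these jobs use, --test_args is forwarded to the Ginkgo suite, and the --ginkgo.focus value is a regular expression (spaces and hyphens escaped) that matches only the full name of this spec, "capz-e2e [It] Conformance Tests conformance-tests". A minimal sketch of the same invocation, wrapped for readability and assuming the hack/e2e.go wrapper is available in the current working tree:

    # Re-run only the failing spec; keep the focus regex single-quoted so the
    # shell passes the \s and \[ escapes through to Ginkgo unchanged.
    go run hack/e2e.go -v --test \
      --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sConformance\sTests\sconformance\-tests$'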
[FAILED] Unexpected error:
    <*errors.withStack | 0xc000b99278>: {
        error: <*errors.withMessage | 0xc000a123a0>{
            cause: <*errors.errorString | 0xc0008fd030>{
                s: "error container run failed with exit code 1",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x3385599, 0x3613f07, 0x19306fb, 0x19441f8, 0x14c5741],
    }
    Unable to run conformance tests: error container run failed with exit code 1
occurred
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:238 @ 03/09/23 17:21:54.345





Error lines from build-log.txt

... skipping 540 lines ...
------------------------------
Conformance Tests conformance-tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100
  INFO: Cluster name is capz-conf-36g11k
  STEP: Creating namespace "capz-conf-36g11k" for hosting the cluster @ 03/09/23 16:28:57.866
  Mar  9 16:28:57.866: INFO: starting to create namespace for hosting the "capz-conf-36g11k" test spec
2023/03/09 16:28:57 failed trying to get namespace (capz-conf-36g11k):namespaces "capz-conf-36g11k" not found
  INFO: Creating namespace capz-conf-36g11k
  INFO: Creating event watcher for namespace "capz-conf-36g11k"
  conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:102 @ 03/09/23 16:28:57.913
    conformance-tests
    Name | N | Min | Median | Mean | StdDev | Max
  INFO: Creating the workload cluster with name "capz-conf-36g11k" using the "conformance-ci-artifacts-windows-containerd" template (Kubernetes v1.26.3-rc.0.16+577f97e00e4195, 1 control-plane machines, 0 worker machines)
... skipping 480 lines ...
  [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
  test/e2e/common/network/networking.go:82
  ------------------------------
  SSSSSSS
  ------------------------------
  • [SLOW TEST] [11.197 seconds]
  [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/apimachinery/webhook.go:239
  ------------------------------
  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  • [SLOW TEST] [27.377 seconds]
  [sig-windows] [Feature:Windows] Kubelet-Stats Kubelet stats collection for Windows nodes when running 3 pods should return within 10 seconds
... skipping 363 lines ...
  [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]
  test/e2e/kubectl/kubectl.go:1713
  ------------------------------
  SSSSSSSSSSSSSSSS
  ------------------------------
  • [SLOW TEST] [22.862 seconds]
  [sig-windows] [Feature:WindowsHostProcessContainers] [MinimumKubeletVersion:1.22] HostProcess containers metrics should report count of started and failed to start HostProcess containers
  test/e2e/windows/host_process.go:510
  ------------------------------
  SSSSSSSS
  ------------------------------
  • [SLOW TEST] [13.801 seconds]
  [sig-auth] ServiceAccounts should mount projected service account token [Conformance]
... skipping 70 lines ...
  • [SLOW TEST] [29.006 seconds]
  [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]
  test/e2e/common/node/container_probe.go:231
  ------------------------------
  SSS
  ------------------------------
  • [FAILED] [54.914 seconds]
  [sig-auth] ServiceAccounts
  test/e2e/auth/framework.go:23
    [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
    test/e2e/auth/service_accounts.go:531
  
    Begin Captured GinkgoWriter Output >>
... skipping 6 lines ...
      STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 03/09/23 17:02:01.177
      [BeforeEach] [sig-auth] ServiceAccounts
        test/e2e/framework/metrics/init/init.go:31
      [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
        test/e2e/auth/service_accounts.go:531
      Mar  9 17:02:01.618: INFO: created pod
      Mar  9 17:02:01.618: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-3289" to be "Succeeded or Failed"
      Mar  9 17:02:01.726: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 108.596605ms
      Mar  9 17:02:03.842: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223789791s
      Mar  9 17:02:05.841: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222821374s
      Mar  9 17:02:07.842: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 6.224142922s
      Mar  9 17:02:09.842: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 8.224160546s
      Mar  9 17:02:11.840: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 10.222467521s
      Mar  9 17:02:13.840: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 12.222444279s
      Mar  9 17:02:15.841: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 14.223569932s
      Mar  9 17:02:17.841: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=false. Elapsed: 16.222791638s
      Mar  9 17:02:19.841: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=false. Elapsed: 18.223161786s
      Mar  9 17:02:21.841: INFO: Pod "oidc-discovery-validator": Phase="Failed", Reason="", readiness=false. Elapsed: 20.222844352s
      Mar  9 17:02:51.841: INFO: polling logs
      Mar  9 17:02:51.978: INFO: Pod logs: 
      I0309 17:02:06.439126   17064 log.go:198] OK: Got token
      I0309 17:02:06.742647   17064 log.go:198] validating with in-cluster discovery
      I0309 17:02:06.764745   17064 log.go:198] OK: got issuer https://kubernetes.default.svc.cluster.local
      I0309 17:02:06.764745   17064 log.go:198] Full, not-validated claims: 
      openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-3289:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1678381921, NotBefore:1678381321, IssuedAt:1678381321, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-3289", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"ffcec35b-60a1-4767-8987-0ec04e066063"}}}
      I0309 17:02:06.764745   17064 log.go:198] Ensuring Windows DNS availability
      I0309 17:02:06.806913   17064 log.go:198] OK: Resolved host kubernetes.default.svc.cluster.local: [10.96.0.1]
      I0309 17:02:16.823859   17064 log.go:198] failed to validate with in-cluster discovery: Get "https://kubernetes.default.svc.cluster.local/.well-known/openid-configuration": net/http: TLS handshake timeout
      I0309 17:02:16.823902   17064 log.go:198] falling back to validating with external discovery
      I0309 17:02:16.823933   17064 log.go:198] OK: got issuer https://kubernetes.default.svc.cluster.local
      I0309 17:02:16.823933   17064 log.go:198] Full, not-validated claims: 
      openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-3289:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1678381921, NotBefore:1678381321, IssuedAt:1678381321, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-3289", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"ffcec35b-60a1-4767-8987-0ec04e066063"}}}
      I0309 17:02:16.823933   17064 log.go:198] Ensuring Windows DNS availability
      I0309 17:02:16.824624   17064 log.go:198] OK: Resolved host kubernetes.default.svc.cluster.local: [10.96.0.1]
      I0309 17:02:16.956071   17064 log.go:198] Get "https://kubernetes.default.svc.cluster.local/.well-known/openid-configuration": x509: certificate signed by unknown authority
  
      Mar  9 17:02:51.978: INFO: Unexpected error: 
          <*fmt.wrapError | 0xc0030ca500>: {
              msg: "error while waiting for pod svcaccounts-3289/oidc-discovery-validator to be Succeeded or Failed: pod \"oidc-discovery-validator\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-09 17:02:01 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-09 17:02:17 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-09 17:02:17 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-09 17:02:01 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.1.0.5 PodIP:192.168.144.47 PodIPs:[{IP:192.168.144.47}] StartTime:2023-03-09 17:02:01 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:oidc-discovery-validator State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2023-03-09 17:02:06 +0000 UTC,FinishedAt:2023-03-09 17:02:16 +0000 UTC,ContainerID:containerd://a31907feab8f68640182a710061b2f122c8e856c65fffb369bb6c90e96c5b348,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/agnhost:2.43 ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e ContainerID:containerd://a31907feab8f68640182a710061b2f122c8e856c65fffb369bb6c90e96c5b348 Started:0xc005f9759d}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
              err: <*errors.errorString | 0xc00159fd00>{
                  s: "pod \"oidc-discovery-validator\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-09 17:02:01 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-09 17:02:17 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-09 17:02:17 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-09 17:02:01 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.1.0.5 PodIP:192.168.144.47 PodIPs:[{IP:192.168.144.47}] StartTime:2023-03-09 17:02:01 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:oidc-discovery-validator State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2023-03-09 17:02:06 +0000 UTC,FinishedAt:2023-03-09 17:02:16 +0000 UTC,ContainerID:containerd://a31907feab8f68640182a710061b2f122c8e856c65fffb369bb6c90e96c5b348,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/agnhost:2.43 ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e ContainerID:containerd://a31907feab8f68640182a710061b2f122c8e856c65fffb369bb6c90e96c5b348 Started:0xc005f9759d}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
              },
          }
      Mar  9 17:02:51.978: FAIL: error while waiting for pod svcaccounts-3289/oidc-discovery-validator to be Succeeded or Failed: pod "oidc-discovery-validator" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-09 17:02:01 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-09 17:02:17 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-09 17:02:17 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-09 17:02:01 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.1.0.5 PodIP:192.168.144.47 PodIPs:[{IP:192.168.144.47}] StartTime:2023-03-09 17:02:01 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:oidc-discovery-validator State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2023-03-09 17:02:06 +0000 UTC,FinishedAt:2023-03-09 17:02:16 +0000 UTC,ContainerID:containerd://a31907feab8f68640182a710061b2f122c8e856c65fffb369bb6c90e96c5b348,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/agnhost:2.43 ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e ContainerID:containerd://a31907feab8f68640182a710061b2f122c8e856c65fffb369bb6c90e96c5b348 Started:0xc005f9759d}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
  
      Full Stack Trace
      k8s.io/kubernetes/test/e2e/auth.glob..func5.7()
      	test/e2e/auth/service_accounts.go:637 +0xc92
      [AfterEach] [sig-auth] ServiceAccounts
        test/e2e/framework/node/init/init.go:32
... skipping 7 lines ...
      STEP: Found 4 events. 03/09/23 17:02:52.352
      Mar  9 17:02:52.352: INFO: At 2023-03-09 17:02:01 +0000 UTC - event for oidc-discovery-validator: {default-scheduler } Scheduled: Successfully assigned svcaccounts-3289/oidc-discovery-validator to capz-conf-sm8p4
      Mar  9 17:02:52.352: INFO: At 2023-03-09 17:02:04 +0000 UTC - event for oidc-discovery-validator: {kubelet capz-conf-sm8p4} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
      Mar  9 17:02:52.352: INFO: At 2023-03-09 17:02:04 +0000 UTC - event for oidc-discovery-validator: {kubelet capz-conf-sm8p4} Created: Created container oidc-discovery-validator
      Mar  9 17:02:52.352: INFO: At 2023-03-09 17:02:06 +0000 UTC - event for oidc-discovery-validator: {kubelet capz-conf-sm8p4} Started: Started container oidc-discovery-validator
      Mar  9 17:02:52.465: INFO: POD                       NODE             PHASE   GRACE  CONDITIONS
      Mar  9 17:02:52.465: INFO: oidc-discovery-validator  capz-conf-sm8p4  Failed         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-09 17:02:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-09 17:02:17 +0000 UTC PodFailed } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-09 17:02:17 +0000 UTC PodFailed } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-09 17:02:01 +0000 UTC  }]
      Mar  9 17:02:52.465: INFO: 
      Mar  9 17:02:52.734: INFO: 
      Logging node info for node capz-conf-36g11k-control-plane-2wmgz
      Mar  9 17:02:52.859: INFO: Node Info: &Node{ObjectMeta:{capz-conf-36g11k-control-plane-2wmgz    4275257d-897a-4bc3-821f-88a3ed1c1b81 13987 0 2023-03-09 16:34:26 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_B2s beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westeurope failure-domain.beta.kubernetes.io/zone:westeurope-2 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-36g11k-control-plane-2wmgz kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_B2s topology.disk.csi.azure.com/zone:westeurope-2 topology.kubernetes.io/region:westeurope topology.kubernetes.io/zone:westeurope-2] map[cluster.x-k8s.io/cluster-name:capz-conf-36g11k cluster.x-k8s.io/cluster-namespace:capz-conf-36g11k cluster.x-k8s.io/machine:capz-conf-36g11k-control-plane-xw7vp cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-conf-36g11k-control-plane csi.volume.kubernetes.io/nodeid:{"csi.tigera.io":"capz-conf-36g11k-control-plane-2wmgz","disk.csi.azure.com":"capz-conf-36g11k-control-plane-2wmgz"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.136.192 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-09 16:34:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-09 16:34:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2023-03-09 16:34:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2023-03-09 16:35:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {calico-node Update v1 2023-03-09 16:35:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-03-09 17:02:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.disk.csi.azure.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-conf-36g11k/providers/Microsoft.Compute/virtualMachines/capz-conf-36g11k-control-plane-2wmgz,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4123181056 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4018323456 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-03-09 16:35:46 +0000 UTC,LastTransitionTime:2023-03-09 16:35:46 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-09 17:02:44 +0000 UTC,LastTransitionTime:2023-03-09 16:34:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-09 17:02:44 +0000 UTC,LastTransitionTime:2023-03-09 16:34:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-09 17:02:44 +0000 UTC,LastTransitionTime:2023-03-09 16:34:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-09 17:02:44 +0000 UTC,LastTransitionTime:2023-03-09 16:35:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-36g11k-control-plane-2wmgz,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fd08d47c9bab4281969a398ac708da96,SystemUUID:281d014a-487f-d541-aec8-a4254d70f98f,BootID:39ae4cb6-3dad-45e5-921b-b0f59b9060cf,KernelVersion:5.4.0-1104-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.18,KubeletVersion:v1.26.3-rc.0.16+577f97e00e4195,KubeProxyVersion:v1.26.3-rc.0.16+577f97e00e4195,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.26.3-rc.0.16_577f97e00e4195 registry.k8s.io/kube-apiserver-amd64:v1.26.3-rc.0.16_577f97e00e4195 registry.k8s.io/kube-apiserver:v1.26.3-rc.0.16_577f97e00e4195],SizeBytes:135288158,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.26.3-rc.0.16_577f97e00e4195 registry.k8s.io/kube-controller-manager-amd64:v1.26.3-rc.0.16_577f97e00e4195 registry.k8s.io/kube-controller-manager:v1.26.3-rc.0.16_577f97e00e4195],SizeBytes:124663288,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[docker.io/calico/cni@sha256:a38d53cb8688944eafede2f0eadc478b1b403cefeff7953da57fe9cd2d65e977 docker.io/calico/cni:v3.25.0],SizeBytes:87984941,},ContainerImage{Names:[docker.io/calico/node@sha256:a85123d1882832af6c45b5e289c6bb99820646cb7d4f6006f98095168808b1e6 docker.io/calico/node:v3.25.0],SizeBytes:87185935,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner@sha256:3ef7d954946bd1cf9e5e3564a8d1acf8e5852616f7ae96bcbc5ced8c275483ee mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.3.0],SizeBytes:61391360,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-resizer@sha256:9ba6483d2f8aa6051cb3a50e42d638fc17a6e4699a6689f054969024b7c12944 mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.6.0],SizeBytes:58560473,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-attacher@sha256:bc317fea7e7bbaff65130d7ac6ea7c96bc15eb1f086374b8c3359f11988ac024 mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v4.0.0],SizeBytes:57948644,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.26.3-rc.0.16_577f97e00e4195 registry.k8s.io/kube-scheduler-amd64:v1.26.3-rc.0.16_577f97e00e4195 registry.k8s.io/kube-scheduler:v1.26.3-rc.0.16_577f97e00e4195],SizeBytes:57763307,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi@sha256:5f9044f5ddfba19c4fcb1d4c41984d17b72c1050692bcaeaee3a1e93cd0a17ca mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.0],SizeBytes:56451605,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.26.3-rc.0.16_577f97e00e4195 registry.k8s.io/kube-proxy-amd64:v1.26.3-rc.0.16_577f97e00e4195 registry.k8s.io/kube-proxy:v1.26.3-rc.0.16_577f97e00e4195],SizeBytes:52708366,},ContainerImage{Names:[docker.io/calico/apiserver@sha256:9819c1b569e60eec4dbab82c1b41cee80fe8af282b25ba2c174b2a00ae555af6 docker.io/calico/apiserver:v3.25.0],SizeBytes:35624155,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:0f03b93af45f39704b7da175db31e20da63d2ab369f350e59de8cbbef9d703e0 
registry.k8s.io/kube-apiserver:v1.26.2],SizeBytes:35329425,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:5434d52f88eb16bc5e98ccb65e97e97cb5cf7861749afbf26174d27c4ece1fad registry.k8s.io/kube-controller-manager:v1.26.2],SizeBytes:32180749,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:c45af3a9692d87a527451cf544557138fedf86f92b6e39bf2003e2fdb848dce3 docker.io/calico/kube-controllers:v3.25.0],SizeBytes:31271800,},ContainerImage{Names:[docker.io/calico/typha@sha256:f7e0557e03f422c8ba5fcf64ef0fac054ee99935b5d101a0a50b5e9b65f6a5c5 docker.io/calico/typha:v3.25.0],SizeBytes:28533187,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter@sha256:a889e925e15f9423f7842f1b769f64cbcf6a20b6956122836fc835cf22d9073f mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1],SizeBytes:22192414,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:5dac6611aceb1452a5d4036108a15ceb0699c083a942977e30640d521e7d2078 registry.k8s.io/kube-proxy:v1.26.2],SizeBytes:21541935,},ContainerImage{Names:[quay.io/tigera/operator@sha256:89eef35e1bbe8c88792ce69c3f3f38fb9838e58602c570524350b5f3ab127582 quay.io/tigera/operator:v1.29.0],SizeBytes:21108896,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:da109877fd8fd0feba2f9a4cb6a199797452c17ddcfaf7b023cf0bac09e51417 registry.k8s.io/kube-scheduler:v1.26.2],SizeBytes:17489559,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[docker.io/calico/node-driver-registrar@sha256:f559ee53078266d2126732303f588b9d4266607088e457ea04286f31727676f7 docker.io/calico/node-driver-registrar:v3.25.0],SizeBytes:11133658,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar@sha256:515b883deb0ae8d58eef60312f4d460ff8a3f52a2a5e487c94a8ebb2ca362720 mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.6.2],SizeBytes:10076715,},ContainerImage{Names:[mcr.microsoft.com/oss/kubernetes-csi/livenessprobe@sha256:fcb73e1939d9abeb2d1e1680b476a10a422a04a73ea5a65e64eec3fde1f2a5a1 mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0],SizeBytes:9117963,},ContainerImage{Names:[docker.io/calico/csi@sha256:61a95f3ee79a7e591aff9eff535be73e62d2c3931d07c2ea8a1305f7bea19b31 docker.io/calico/csi:v3.25.0],SizeBytes:9076936,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:01ddd57d428787b3ac689daa685660defe4bd7810069544bd43a9103a7b0a789 docker.io/calico/pod2daemon-flexvol:v3.25.0],SizeBytes:7076045,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
      Mar  9 17:02:52.860: INFO: 
      Logging kubelet events for node capz-conf-36g11k-control-plane-2wmgz
... skipping 105 lines ...
      Latency metrics for node capz-conf-sm8p4
      [DeferCleanup (Each)] [sig-auth] ServiceAccounts
        tear down framework | framework.go:193
      STEP: Destroying namespace "svcaccounts-3289" for this suite. 03/09/23 17:02:55.435
    << End Captured GinkgoWriter Output
  
    Mar  9 17:02:51.978: error while waiting for pod svcaccounts-3289/oidc-discovery-validator to be Succeeded or Failed: pod "oidc-discovery-validator" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-09 17:02:01 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-09 17:02:17 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-09 17:02:17 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-03-09 17:02:01 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.1.0.5 PodIP:192.168.144.47 PodIPs:[{IP:192.168.144.47}] StartTime:2023-03-09 17:02:01 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:oidc-discovery-validator State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2023-03-09 17:02:06 +0000 UTC,FinishedAt:2023-03-09 17:02:16 +0000 UTC,ContainerID:containerd://a31907feab8f68640182a710061b2f122c8e856c65fffb369bb6c90e96c5b348,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/agnhost:2.43 ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e ContainerID:containerd://a31907feab8f68640182a710061b2f122c8e856c65fffb369bb6c90e96c5b348 Started:0xc005f9759d}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
    In [It] at: test/e2e/auth/service_accounts.go:637
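
  The captured pod log above narrows the failure: the validator obtained a token and resolved kubernetes.default.svc.cluster.local, but its in-cluster request for the OIDC discovery document timed out during the TLS handshake, and the external-discovery fallback then rejected the API server certificate (x509: certificate signed by unknown authority). A minimal diagnostic sketch of the same in-cluster request, runnable from a Linux pod in the workload cluster; the token and CA paths are the standard projected service-account mounts, not values taken from this log:

      # Query the OIDC discovery endpoint the validator uses, authenticating
      # with the pod's projected service-account token and cluster CA.
      TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
      CA=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      curl --cacert "$CA" -H "Authorization: Bearer $TOKEN" \
        https://kubernetes.default.svc.cluster.local/.well-known/openid-configuration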
  ------------------------------
  SSSSSSS
  ------------------------------
  • [SLOW TEST] [78.862 seconds]
  [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
... skipping 257 lines ...
  [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  test/e2e/common/node/pods.go:398
  ------------------------------
  SSSSSSSSSSSSS
  ------------------------------
  • [SLOW TEST] [119.774 seconds]
  [sig-apps] CronJob should delete failed finished jobs with limit of one job
  test/e2e/apps/cronjob.go:291
  ------------------------------
  S
  ------------------------------
  • [SLOW TEST] [211.195 seconds]
  [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
... skipping 309 lines ...
  [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/common/storage/downwardapi_volume.go:193
  ------------------------------
  SSSSSSSSSSSSSSSSS
  ------------------------------
  • [SLOW TEST] [65.554 seconds]
  [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/common/node/init_container.go:334
  ------------------------------
  SSSSSSSSSSSSS
  ------------------------------
  • [SLOW TEST] [60.321 seconds]
  [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
... skipping 153 lines ...
  [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]
  test/e2e/network/service.go:1557
  ------------------------------
  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  • [SLOW TEST] [19.311 seconds]
  [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/common/node/init_container.go:458
  ------------------------------
  SSSS
  ------------------------------
  • [SLOW TEST] [18.614 seconds]
  [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
  test/e2e/apimachinery/resource_quota.go:803
  ------------------------------
  SSSSSSSSSSSSSSSSSS
  ------------------------------
  • [SLOW TEST] [39.248 seconds]
  [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/apps/job.go:426
  ------------------------------
  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  • [SLOW TEST] [33.533 seconds]
  [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
... skipping 53 lines ...
  [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/common/node/container_probe.go:215
  ------------------------------
  
  
  Summarizing 1 Failure:
    [FAIL] [sig-auth] ServiceAccounts [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
    test/e2e/auth/service_accounts.go:637
  
  Ran 337 of 7069 Specs in 2058.022 seconds
  FAIL! -- 336 Passed | 1 Failed | 0 Pending | 6732 Skipped
  
  I0309 16:47:35.218707      13 e2e.go:126] Starting e2e run "9dd3976c-70a0-46b2-be6a-fc238f5855c3" on Ginkgo node 1
  You're using deprecated Ginkgo functionality:
  =============================================
    --ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead
    Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
... skipping 9 lines ...
    --ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead
    Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
  
  To silence deprecations that can be silenced set the following environment variable:
    ACK_GINKGO_DEPRECATIONS=2.4.0
  
  --- FAIL: TestE2E (1928.23s)
  FAIL
  
  I0309 16:47:35.204680      16 e2e.go:126] Starting e2e run "3fccad7c-7b40-4199-9a5a-67b0a0717c1c" on Ginkgo node 3
  You're using deprecated Ginkgo functionality:
  =============================================
    --ginkgo.flakeAttempts is deprecated, use --ginkgo.flake-attempts instead
    Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags
... skipping 14 lines ...
  
  PASS
  
  
  Ginkgo ran 1 suite in 34m18.92955714s
  
  Test Suite Failed

  You're using deprecated Ginkgo functionality:
  =============================================
    --slowSpecThreshold is deprecated use --slow-spec-threshold instead and pass in a duration string (e.g. '5s', not '5.0')
    Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed--slowspecthreshold
  
  To silence deprecations that can be silenced set the following environment variable:
    ACK_GINKGO_DEPRECATIONS=2.4.0
  
  [FAILED] in [It] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:238 @ 03/09/23 17:21:54.345
  Mar  9 17:21:54.346: INFO: FAILED!
  Mar  9 17:21:54.346: INFO: Cleaning up after "Conformance Tests conformance-tests" spec
  STEP: Dumping logs from the "capz-conf-36g11k" workload cluster @ 03/09/23 17:21:54.346
  Mar  9 17:21:54.346: INFO: Dumping workload cluster capz-conf-36g11k/capz-conf-36g11k logs
  Mar  9 17:21:54.391: INFO: Collecting logs for Linux node capz-conf-36g11k-control-plane-2wmgz in cluster capz-conf-36g11k in namespace capz-conf-36g11k

  Mar  9 17:22:14.077: INFO: Collecting boot logs for AzureMachine capz-conf-36g11k-control-plane-2wmgz

  Mar  9 17:22:15.897: INFO: Collecting logs for Windows node capz-conf-sm8p4 in cluster capz-conf-36g11k in namespace capz-conf-36g11k

  Mar  9 17:24:31.788: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-sm8p4 to /logs/artifacts/clusters/capz-conf-36g11k/machines/capz-conf-36g11k-md-win-846456b994-f5wtr/crashdumps.tar
  Mar  9 17:24:35.129: INFO: Collecting boot logs for AzureMachine capz-conf-36g11k-md-win-sm8p4

Failed to get logs for Machine capz-conf-36g11k-md-win-846456b994-f5wtr, Cluster capz-conf-36g11k/capz-conf-36g11k: running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1
  Mar  9 17:24:36.465: INFO: Collecting logs for Windows node capz-conf-kcstz in cluster capz-conf-36g11k in namespace capz-conf-36g11k

  Mar  9 17:26:54.684: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-kcstz to /logs/artifacts/clusters/capz-conf-36g11k/machines/capz-conf-36g11k-md-win-846456b994-mrcvb/crashdumps.tar
  Mar  9 17:26:58.103: INFO: Collecting boot logs for AzureMachine capz-conf-36g11k-md-win-kcstz

Failed to get logs for Machine capz-conf-36g11k-md-win-846456b994-mrcvb, Cluster capz-conf-36g11k/capz-conf-36g11k: running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1
  Mar  9 17:26:59.377: INFO: Dumping workload cluster capz-conf-36g11k/capz-conf-36g11k kube-system pod logs
  Mar  9 17:27:00.521: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-58685bd9d8-4trjz, container calico-apiserver
  Mar  9 17:27:00.521: INFO: Describing Pod calico-apiserver/calico-apiserver-58685bd9d8-4trjz
  Mar  9 17:27:01.009: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-58685bd9d8-d7vsx, container calico-apiserver
  Mar  9 17:27:01.009: INFO: Describing Pod calico-apiserver/calico-apiserver-58685bd9d8-d7vsx
  Mar  9 17:27:01.229: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-6b7b9c649d-n7hnq, container calico-kube-controllers
... skipping 69 lines ...
  INFO: Waiting for the Cluster capz-conf-36g11k/capz-conf-36g11k to be deleted
  STEP: Waiting for cluster capz-conf-36g11k to be deleted @ 03/09/23 17:27:12.624
  Mar  9 17:34:02.844: INFO: Deleting namespace used for hosting the "conformance-tests" test spec
  INFO: Deleting namespace capz-conf-36g11k
  Mar  9 17:34:02.865: INFO: Checking if any resources are left over in Azure for spec "conformance-tests"
  STEP: Redacting sensitive information from logs @ 03/09/23 17:34:03.396
• [FAILED] [3923.845 seconds]
Conformance Tests [It] conformance-tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100

  [FAILED] Unexpected error:
      <*errors.withStack | 0xc000b99278>: {
          error: <*errors.withMessage | 0xc000a123a0>{
              cause: <*errors.errorString | 0xc0008fd030>{
                  s: "error container run failed with exit code 1",
              },
              msg: "Unable to run conformance tests",
          },
          stack: [0x3385599, 0x3613f07, 0x19306fb, 0x19441f8, 0x14c5741],
      }
      Unable to run conformance tests: error container run failed with exit code 1
  occurred
  In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:238 @ 03/09/23 17:21:54.345

  Full Stack Trace
    sigs.k8s.io/cluster-api-provider-azure/test/e2e.glob..func3.2()
    	/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:238 +0x18fa
... skipping 6 lines ...
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
[ReportAfterSuite] PASSED [0.009 seconds]
------------------------------

Summarizing 1 Failure:
  [FAIL] Conformance Tests [It] conformance-tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:238

Ran 1 of 26 Specs in 4121.809 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 25 Skipped
--- FAIL: TestE2E (4121.82s)
FAIL
You're using deprecated Ginkgo functionality:
=============================================
  CurrentGinkgoTestDescription() is deprecated in Ginkgo V2.  Use CurrentSpecReport() instead.
  Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:289
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:292

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=2.6.0


Ginkgo ran 1 suite in 1h11m38.562104525s

Test Suite Failed
make[3]: *** [Makefile:663: test-e2e-run] Error 1
make[3]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: *** [Makefile:678: test-e2e-skip-push] Error 2
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[1]: *** [Makefile:694: test-conformance] Error 2
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:704: test-windows-upstream] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 6 lines ...