PR: jackfrancis: Update default k8s version to v1.25 for testing
Result: FAILURE
Tests: 1 failed / 26 succeeded
Started: 2023-01-25 01:37
Elapsed: 1h13m
Revision: aa4b89f70338b5bf172b792cbe9a26a0f73595d6
Refs: 3088

Test Failures


capz-e2e [It] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node (43m15s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sWorkload\scluster\screation\sCreating\sa\sGPU\-enabled\scluster\s\[OPTIONAL\]\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
[FAILED] Timed out after 1500.001s.

Logs for pod gpu-operator-node-feature-discovery-master-77bc558fdc-zd7rh:
I0125 01:54:02.020848       1 nfd-master.go:170] Node Feature Discovery Master v0.10.1
I0125 01:54:02.021405       1 nfd-master.go:174] NodeName: "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 01:54:02.021418       1 nfd-master.go:185] starting nfd LabelRule controller
I0125 01:54:02.049822       1 nfd-master.go:226] gRPC server serving on port: 8080
I0125 01:54:13.799770       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 01:54:52.274412       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-md-0-fngb8"
I0125 01:55:13.863026       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 01:55:52.341339       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-md-0-fngb8"
I0125 01:56:13.909377       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 01:56:52.518845       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-md-0-fngb8"
I0125 01:57:13.941468       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 01:57:52.542895       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-md-0-fngb8"
I0125 01:58:13.969465       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 01:58:52.569786       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-md-0-fngb8"
I0125 01:59:13.996833       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 01:59:52.593752       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-md-0-fngb8"
I0125 02:00:14.027415       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 02:00:52.618449       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-md-0-fngb8"
I0125 02:01:14.062015       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 02:01:52.647397       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-md-0-fngb8"
I0125 02:02:14.092478       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 02:02:52.671389       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-md-0-fngb8"
I0125 02:03:14.118042       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 02:03:52.697770       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-md-0-fngb8"
I0125 02:04:14.145041       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 02:04:52.721953       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-md-0-fngb8"
I0125 02:05:14.179417       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 02:05:52.751522       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-md-0-fngb8"
I0125 02:06:14.205104       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 02:06:52.774416       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-md-0-fngb8"
I0125 02:07:14.231272       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 02:07:52.799744       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-md-0-fngb8"
I0125 02:08:14.258334       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 02:08:52.827811       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-md-0-fngb8"
I0125 02:09:14.287055       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 02:09:52.862207       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-md-0-fngb8"
I0125 02:10:14.313237       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 02:10:52.885862       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-md-0-fngb8"
I0125 02:11:14.342255       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 02:11:52.910149       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-md-0-fngb8"
I0125 02:12:14.369921       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 02:12:52.935491       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-md-0-fngb8"
I0125 02:13:14.395452       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 02:13:52.963304       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-md-0-fngb8"
I0125 02:14:14.421117       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 02:14:52.987600       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-md-0-fngb8"
I0125 02:15:14.449399       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 02:15:53.010952       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-md-0-fngb8"
I0125 02:16:14.474115       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 02:16:53.036015       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-md-0-fngb8"
I0125 02:17:14.501369       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 02:17:53.060327       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-md-0-fngb8"
I0125 02:18:14.533389       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 02:18:53.083971       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-md-0-fngb8"
I0125 02:19:14.558948       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 02:19:53.107299       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-md-0-fngb8"
I0125 02:20:14.583191       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 02:20:53.132559       1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-md-0-fngb8"

Logs for pod gpu-operator-node-feature-discovery-worker-bbnrb:
I0125 01:54:52.247172       1 nfd-worker.go:155] Node Feature Discovery Worker v0.10.1
I0125 01:54:52.247238       1 nfd-worker.go:156] NodeName: 'capz-e2e-plb7mq-gpu-md-0-fngb8'
I0125 01:54:52.247645       1 nfd-worker.go:423] configuration file "/etc/kubernetes/node-feature-discovery/nfd-worker.conf" parsed
I0125 01:54:52.247718       1 nfd-worker.go:461] worker (re-)configuration successfully completed
I0125 01:54:52.247763       1 base.go:126] connecting to nfd-master at gpu-operator-node-feature-discovery-master:8080 ...
I0125 01:54:52.247824       1 component.go:36] [core]parsed scheme: ""
I0125 01:54:52.247846       1 component.go:36] [core]scheme "" not registered, fallback to default scheme
I0125 01:54:52.247879       1 component.go:36] [core]ccResolverWrapper: sending update to cc: {[{gpu-operator-node-feature-discovery-master:8080  <nil> 0 <nil>}] <nil> <nil>}
I0125 01:54:52.247905       1 component.go:36] [core]ClientConn switching balancer to "pick_first"
I0125 01:54:52.247922       1 component.go:36] [core]Channel switches to new LB policy "pick_first"
I0125 01:54:52.247952       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
I0125 01:54:52.247994       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
I0125 01:54:52.248364       1 component.go:36] [core]Channel Connectivity change to CONNECTING
I0125 01:54:52.251072       1 component.go:36] [core]Subchannel Connectivity change to READY
I0125 01:54:52.251092       1 component.go:36] [core]Channel Connectivity change to READY
I0125 01:54:52.258965       1 nfd-worker.go:472] starting feature discovery...
I0125 01:54:52.259072       1 nfd-worker.go:484] feature discovery completed
I0125 01:54:52.259082       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 01:55:52.329939       1 nfd-worker.go:472] starting feature discovery...
I0125 01:55:52.330050       1 nfd-worker.go:484] feature discovery completed
I0125 01:55:52.330063       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 01:56:52.508027       1 nfd-worker.go:472] starting feature discovery...
I0125 01:56:52.508138       1 nfd-worker.go:484] feature discovery completed
I0125 01:56:52.508149       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 01:57:52.532066       1 nfd-worker.go:472] starting feature discovery...
I0125 01:57:52.532206       1 nfd-worker.go:484] feature discovery completed
I0125 01:57:52.532219       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 01:58:52.556106       1 nfd-worker.go:472] starting feature discovery...
I0125 01:58:52.556219       1 nfd-worker.go:484] feature discovery completed
I0125 01:58:52.556231       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 01:59:52.582365       1 nfd-worker.go:472] starting feature discovery...
I0125 01:59:52.582472       1 nfd-worker.go:484] feature discovery completed
I0125 01:59:52.582484       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:00:52.606258       1 nfd-worker.go:472] starting feature discovery...
I0125 02:00:52.606368       1 nfd-worker.go:484] feature discovery completed
I0125 02:00:52.606382       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:01:52.631045       1 nfd-worker.go:472] starting feature discovery...
I0125 02:01:52.631158       1 nfd-worker.go:484] feature discovery completed
I0125 02:01:52.631170       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:02:52.659479       1 nfd-worker.go:472] starting feature discovery...
I0125 02:02:52.659596       1 nfd-worker.go:484] feature discovery completed
I0125 02:02:52.659624       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:03:52.684727       1 nfd-worker.go:472] starting feature discovery...
I0125 02:03:52.684835       1 nfd-worker.go:484] feature discovery completed
I0125 02:03:52.684849       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:04:52.710223       1 nfd-worker.go:472] starting feature discovery...
I0125 02:04:52.710333       1 nfd-worker.go:484] feature discovery completed
I0125 02:04:52.710345       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:05:52.741106       1 nfd-worker.go:472] starting feature discovery...
I0125 02:05:52.741385       1 nfd-worker.go:484] feature discovery completed
I0125 02:05:52.741400       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:06:52.763663       1 nfd-worker.go:472] starting feature discovery...
I0125 02:06:52.763773       1 nfd-worker.go:484] feature discovery completed
I0125 02:06:52.763785       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:07:52.788477       1 nfd-worker.go:472] starting feature discovery...
I0125 02:07:52.788590       1 nfd-worker.go:484] feature discovery completed
I0125 02:07:52.788601       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:08:52.816571       1 nfd-worker.go:472] starting feature discovery...
I0125 02:08:52.816682       1 nfd-worker.go:484] feature discovery completed
I0125 02:08:52.816694       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:09:52.851208       1 nfd-worker.go:472] starting feature discovery...
I0125 02:09:52.851327       1 nfd-worker.go:484] feature discovery completed
I0125 02:09:52.851341       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:10:52.874493       1 nfd-worker.go:472] starting feature discovery...
I0125 02:10:52.874604       1 nfd-worker.go:484] feature discovery completed
I0125 02:10:52.874616       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:11:52.898865       1 nfd-worker.go:472] starting feature discovery...
I0125 02:11:52.898974       1 nfd-worker.go:484] feature discovery completed
I0125 02:11:52.898986       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:12:52.924415       1 nfd-worker.go:472] starting feature discovery...
I0125 02:12:52.924533       1 nfd-worker.go:484] feature discovery completed
I0125 02:12:52.924545       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:13:52.949408       1 nfd-worker.go:472] starting feature discovery...
I0125 02:13:52.949521       1 nfd-worker.go:484] feature discovery completed
I0125 02:13:52.949533       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:14:52.975528       1 nfd-worker.go:472] starting feature discovery...
I0125 02:14:52.975650       1 nfd-worker.go:484] feature discovery completed
I0125 02:14:52.975663       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:15:52.999586       1 nfd-worker.go:472] starting feature discovery...
I0125 02:15:52.999708       1 nfd-worker.go:484] feature discovery completed
I0125 02:15:52.999720       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:16:53.024352       1 nfd-worker.go:472] starting feature discovery...
I0125 02:16:53.024466       1 nfd-worker.go:484] feature discovery completed
I0125 02:16:53.024478       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:17:53.048583       1 nfd-worker.go:472] starting feature discovery...
I0125 02:17:53.048692       1 nfd-worker.go:484] feature discovery completed
I0125 02:17:53.048704       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:18:53.073683       1 nfd-worker.go:472] starting feature discovery...
I0125 02:18:53.073791       1 nfd-worker.go:484] feature discovery completed
I0125 02:18:53.073803       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:19:53.096913       1 nfd-worker.go:472] starting feature discovery...
I0125 02:19:53.097025       1 nfd-worker.go:484] feature discovery completed
I0125 02:19:53.097037       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:20:53.120457       1 nfd-worker.go:472] starting feature discovery...
I0125 02:20:53.120645       1 nfd-worker.go:484] feature discovery completed
I0125 02:20:53.120662       1 nfd-worker.go:565] sending labeling request to nfd-master

Logs for pod gpu-operator-node-feature-discovery-worker-n2jgl:
I0125 01:54:13.715272       1 nfd-worker.go:155] Node Feature Discovery Worker v0.10.1
I0125 01:54:13.715358       1 nfd-worker.go:156] NodeName: 'capz-e2e-plb7mq-gpu-control-plane-fzzxp'
I0125 01:54:13.716430       1 nfd-worker.go:423] configuration file "/etc/kubernetes/node-feature-discovery/nfd-worker.conf" parsed
I0125 01:54:13.717771       1 nfd-worker.go:461] worker (re-)configuration successfully completed
I0125 01:54:13.718410       1 base.go:126] connecting to nfd-master at gpu-operator-node-feature-discovery-master:8080 ...
I0125 01:54:13.718528       1 component.go:36] [core]parsed scheme: ""
I0125 01:54:13.718543       1 component.go:36] [core]scheme "" not registered, fallback to default scheme
I0125 01:54:13.718622       1 component.go:36] [core]ccResolverWrapper: sending update to cc: {[{gpu-operator-node-feature-discovery-master:8080  <nil> 0 <nil>}] <nil> <nil>}
I0125 01:54:13.718639       1 component.go:36] [core]ClientConn switching balancer to "pick_first"
I0125 01:54:13.718643       1 component.go:36] [core]Channel switches to new LB policy "pick_first"
I0125 01:54:13.718863       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
I0125 01:54:13.718967       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
I0125 01:54:13.719998       1 component.go:36] [core]Channel Connectivity change to CONNECTING
I0125 01:54:13.744678       1 component.go:36] [core]Subchannel Connectivity change to READY
I0125 01:54:13.744702       1 component.go:36] [core]Channel Connectivity change to READY
I0125 01:54:13.772659       1 nfd-worker.go:472] starting feature discovery...
I0125 01:54:13.783953       1 nfd-worker.go:484] feature discovery completed
I0125 01:54:13.784103       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 01:55:13.845054       1 nfd-worker.go:472] starting feature discovery...
I0125 01:55:13.847082       1 nfd-worker.go:484] feature discovery completed
I0125 01:55:13.847177       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 01:56:13.895638       1 nfd-worker.go:472] starting feature discovery...
I0125 01:56:13.895969       1 nfd-worker.go:484] feature discovery completed
I0125 01:56:13.895992       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 01:57:13.928066       1 nfd-worker.go:472] starting feature discovery...
I0125 01:57:13.928347       1 nfd-worker.go:484] feature discovery completed
I0125 01:57:13.928363       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 01:58:13.958858       1 nfd-worker.go:472] starting feature discovery...
I0125 01:58:13.959137       1 nfd-worker.go:484] feature discovery completed
I0125 01:58:13.959152       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 01:59:13.985199       1 nfd-worker.go:472] starting feature discovery...
I0125 01:59:13.985505       1 nfd-worker.go:484] feature discovery completed
I0125 01:59:13.985521       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:00:14.014534       1 nfd-worker.go:472] starting feature discovery...
I0125 02:00:14.014886       1 nfd-worker.go:484] feature discovery completed
I0125 02:00:14.014902       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:01:14.047738       1 nfd-worker.go:472] starting feature discovery...
I0125 02:01:14.047956       1 nfd-worker.go:484] feature discovery completed
I0125 02:01:14.047972       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:02:14.082123       1 nfd-worker.go:472] starting feature discovery...
I0125 02:02:14.082268       1 nfd-worker.go:484] feature discovery completed
I0125 02:02:14.082282       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:03:14.105926       1 nfd-worker.go:472] starting feature discovery...
I0125 02:03:14.106232       1 nfd-worker.go:484] feature discovery completed
I0125 02:03:14.106247       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:04:14.134165       1 nfd-worker.go:472] starting feature discovery...
I0125 02:04:14.134453       1 nfd-worker.go:484] feature discovery completed
I0125 02:04:14.134468       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:05:14.162317       1 nfd-worker.go:472] starting feature discovery...
I0125 02:05:14.162774       1 nfd-worker.go:484] feature discovery completed
I0125 02:05:14.162828       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:06:14.191808       1 nfd-worker.go:472] starting feature discovery...
I0125 02:06:14.192133       1 nfd-worker.go:484] feature discovery completed
I0125 02:06:14.192146       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:07:14.220460       1 nfd-worker.go:472] starting feature discovery...
I0125 02:07:14.220678       1 nfd-worker.go:484] feature discovery completed
I0125 02:07:14.220693       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:08:14.245131       1 nfd-worker.go:472] starting feature discovery...
I0125 02:08:14.245541       1 nfd-worker.go:484] feature discovery completed
I0125 02:08:14.245774       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:09:14.274246       1 nfd-worker.go:472] starting feature discovery...
I0125 02:09:14.274558       1 nfd-worker.go:484] feature discovery completed
I0125 02:09:14.274573       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:10:14.302971       1 nfd-worker.go:472] starting feature discovery...
I0125 02:10:14.303140       1 nfd-worker.go:484] feature discovery completed
I0125 02:10:14.303153       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:11:14.326106       1 nfd-worker.go:472] starting feature discovery...
I0125 02:11:14.326288       1 nfd-worker.go:484] feature discovery completed
I0125 02:11:14.326338       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:12:14.359646       1 nfd-worker.go:472] starting feature discovery...
I0125 02:12:14.359920       1 nfd-worker.go:484] feature discovery completed
I0125 02:12:14.359936       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:13:14.383712       1 nfd-worker.go:472] starting feature discovery...
I0125 02:13:14.383857       1 nfd-worker.go:484] feature discovery completed
I0125 02:13:14.383871       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:14:14.410611       1 nfd-worker.go:472] starting feature discovery...
I0125 02:14:14.410924       1 nfd-worker.go:484] feature discovery completed
I0125 02:14:14.410941       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:15:14.435204       1 nfd-worker.go:472] starting feature discovery...
I0125 02:15:14.435616       1 nfd-worker.go:484] feature discovery completed
I0125 02:15:14.435725       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:16:14.463883       1 nfd-worker.go:472] starting feature discovery...
I0125 02:16:14.464023       1 nfd-worker.go:484] feature discovery completed
I0125 02:16:14.464032       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:17:14.488852       1 nfd-worker.go:472] starting feature discovery...
I0125 02:17:14.489001       1 nfd-worker.go:484] feature discovery completed
I0125 02:17:14.489015       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:18:14.521571       1 nfd-worker.go:472] starting feature discovery...
I0125 02:18:14.521874       1 nfd-worker.go:484] feature discovery completed
I0125 02:18:14.521889       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:19:14.548354       1 nfd-worker.go:472] starting feature discovery...
I0125 02:19:14.548540       1 nfd-worker.go:484] feature discovery completed
I0125 02:19:14.548555       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 02:20:14.571647       1 nfd-worker.go:472] starting feature discovery...
I0125 02:20:14.572137       1 nfd-worker.go:484] feature discovery completed
I0125 02:20:14.572158       1 nfd-worker.go:565] sending labeling request to nfd-master

Expected
    <bool>: false
to be true
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:80 @ 01/25/23 02:21:08.678
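The assertion at azure_gpu.go:80 is the suite's polling check timing out while waiting for a node to advertise an "nvidia.com/gpu" allocatable resource (see the "Waiting for a node to have an 'nvidia.com/gpu' allocatable resource" step in the timeline further down). As a rough illustration only, not the actual CAPZ helper, a client-go sketch of that kind of check (waitForGPUNode is a hypothetical name):

  import (
      "context"
      "time"

      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/apimachinery/pkg/util/wait"
      "k8s.io/client-go/kubernetes"
  )

  // waitForGPUNode polls the workload cluster until some node reports a
  // non-zero "nvidia.com/gpu" allocatable quantity, or the timeout expires
  // (here the suite gave up after 1500s).
  func waitForGPUNode(ctx context.Context, c kubernetes.Interface, timeout time.Duration) error {
      return wait.PollImmediate(10*time.Second, timeout, func() (bool, error) {
          nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
          if err != nil {
              return false, nil // tolerate transient API errors; keep polling
          }
          for _, n := range nodes.Items {
              if q, ok := n.Status.Allocatable["nvidia.com/gpu"]; ok && !q.IsZero() {
                  return true, nil
              }
          }
          return false, nil // no node advertises GPU capacity yet
      })
  }

The NFD master/worker logs above show feature discovery looping normally every minute, which suggests the failure is that the NVIDIA device plugin never advertised GPU capacity within the window, not that node labeling itself was stuck.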


Error lines from build-log.txt

... skipping 620 lines ...
------------------------------
• [944.994 seconds]
Workload cluster creation Creating a Flatcar cluster [OPTIONAL] With Flatcar control-plane and worker nodes
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:321

  Captured StdOut/StdErr Output >>
  2023/01/25 01:46:28 failed trying to get namespace (capz-e2e-q8kecj):namespaces "capz-e2e-q8kecj" not found
  cluster.cluster.x-k8s.io/capz-e2e-q8kecj-flatcar created
  azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-q8kecj-flatcar created
  kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-q8kecj-flatcar-control-plane created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-q8kecj-flatcar-control-plane created
  machinedeployment.cluster.x-k8s.io/capz-e2e-q8kecj-flatcar-md-0 created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-q8kecj-flatcar-md-0 created
  kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-q8kecj-flatcar-md-0 created
  azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created

  felixconfiguration.crd.projectcalico.org/default configured

  Failed to get logs for Machine capz-e2e-q8kecj-flatcar-control-plane-mnjxv, Cluster capz-e2e-q8kecj/capz-e2e-q8kecj-flatcar: [dialing public load balancer at capz-e2e-q8kecj-flatcar-a4881c85.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.63.62:54292->20.242.177.37:22: read: connection reset by peer, dialing public load balancer at capz-e2e-q8kecj-flatcar-a4881c85.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.63.62:54308->20.242.177.37:22: read: connection reset by peer]
  Failed to get logs for Machine capz-e2e-q8kecj-flatcar-md-0-7b64786657-5qk2d, Cluster capz-e2e-q8kecj/capz-e2e-q8kecj-flatcar: [dialing public load balancer at capz-e2e-q8kecj-flatcar-a4881c85.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.63.62:41644->20.242.177.37:22: read: connection reset by peer, dialing public load balancer at capz-e2e-q8kecj-flatcar-a4881c85.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.63.62:41648->20.242.177.37:22: read: connection reset by peer, dialing public load balancer at capz-e2e-q8kecj-flatcar-a4881c85.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.63.62:41658->20.242.177.37:22: read: connection reset by peer, dialing public load balancer at capz-e2e-q8kecj-flatcar-a4881c85.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.63.62:41652->20.242.177.37:22: read: connection reset by peer, dialing public load balancer at capz-e2e-q8kecj-flatcar-a4881c85.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.63.62:41654->20.242.177.37:22: read: connection reset by peer, dialing public load balancer at capz-e2e-q8kecj-flatcar-a4881c85.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.63.62:41650->20.242.177.37:22: read: connection reset by peer, dialing public load balancer at capz-e2e-q8kecj-flatcar-a4881c85.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.63.62:41660->20.242.177.37:22: read: connection reset by peer, dialing public load balancer at capz-e2e-q8kecj-flatcar-a4881c85.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.63.62:41656->20.242.177.37:22: read: connection reset by peer, dialing public load balancer at capz-e2e-q8kecj-flatcar-a4881c85.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.63.62:41664->20.242.177.37:22: read: connection reset by peer]
  << Captured StdOut/StdErr Output

  Timeline >>
  INFO: "" started at Wed, 25 Jan 2023 01:46:28 UTC on Ginkgo node 4 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
  STEP: Creating namespace "capz-e2e-q8kecj" for hosting the cluster @ 01/25/23 01:46:28.22
  Jan 25 01:46:28.220: INFO: starting to create namespace for hosting the "capz-e2e-q8kecj" test spec
... skipping 157 lines ...
------------------------------
• [971.154 seconds]
Workload cluster creation Creating a cluster that uses the external cloud provider and machinepools [OPTIONAL] with 1 control plane node and 1 machinepool
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:573

  Captured StdOut/StdErr Output >>
  2023/01/25 01:46:28 failed trying to get namespace (capz-e2e-6fan57):namespaces "capz-e2e-6fan57" not found
  cluster.cluster.x-k8s.io/capz-e2e-6fan57-flex created
  azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-6fan57-flex created
  kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-6fan57-flex-control-plane created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-6fan57-flex-control-plane created
  machinepool.cluster.x-k8s.io/capz-e2e-6fan57-flex-mp-0 created
  azuremachinepool.infrastructure.cluster.x-k8s.io/capz-e2e-6fan57-flex-mp-0 created
... skipping 2 lines ...

  felixconfiguration.crd.projectcalico.org/default configured

  W0125 01:55:12.184736   36968 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
  2023/01/25 01:55:52 [DEBUG] GET http://20.124.140.220
  W0125 01:56:26.159663   36968 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
  Failed to get logs for MachinePool capz-e2e-6fan57-flex-mp-0, Cluster capz-e2e-6fan57/capz-e2e-6fan57-flex: Unable to collect VMSS Boot Diagnostic logs: failed to parse resource id: parsing failed for /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-6fan57-flex/providers/Microsoft.Compute. Invalid resource Id format
  << Captured StdOut/StdErr Output
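  The warnings.go:70 lines in the captured output are the API server reminding clients that deleting a Job orphans its Pods by default. A minimal client-go sketch of the background-cascade delete the warning suggests (deleteJobCascading is a hypothetical helper, not the e2e suite's code):

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // deleteJobCascading deletes a Job and lets the garbage collector remove
    // its child Pods in the background, avoiding the warning above.
    func deleteJobCascading(ctx context.Context, c kubernetes.Interface, ns, name string) error {
        policy := metav1.DeletePropagationBackground
        return c.BatchV1().Jobs(ns).Delete(ctx, name, metav1.DeleteOptions{
            PropagationPolicy: &policy,
        })
    }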

  Timeline >>
  INFO: "" started at Wed, 25 Jan 2023 01:46:28 UTC on Ginkgo node 7 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
  STEP: Creating namespace "capz-e2e-6fan57" for hosting the cluster @ 01/25/23 01:46:28.228
  Jan 25 01:46:28.228: INFO: starting to create namespace for hosting the "capz-e2e-6fan57" test spec
... skipping 229 lines ...
------------------------------
• [1290.217 seconds]
Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:830

  Captured StdOut/StdErr Output >>
  2023/01/25 01:46:28 failed trying to get namespace (capz-e2e-3zij53):namespaces "capz-e2e-3zij53" not found
  cluster.cluster.x-k8s.io/capz-e2e-3zij53-dual-stack created
  azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-3zij53-dual-stack created
  kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-3zij53-dual-stack-control-plane created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-3zij53-dual-stack-control-plane created
  azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
  machinedeployment.cluster.x-k8s.io/capz-e2e-3zij53-dual-stack-md-0 created
... skipping 330 lines ...
------------------------------
• [1294.125 seconds]
Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:637

  Captured StdOut/StdErr Output >>
  2023/01/25 01:46:28 failed trying to get namespace (capz-e2e-pb2ieg):namespaces "capz-e2e-pb2ieg" not found
  cluster.cluster.x-k8s.io/capz-e2e-pb2ieg-oot created
  azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-pb2ieg-oot created
  kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-pb2ieg-oot-control-plane created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-pb2ieg-oot-control-plane created
  machinedeployment.cluster.x-k8s.io/capz-e2e-pb2ieg-oot-md-0 created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-pb2ieg-oot-md-0 created
  kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-pb2ieg-oot-md-0 created
  azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created

  felixconfiguration.crd.projectcalico.org/default configured

  W0125 01:56:00.455064   36990 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
  2023/01/25 01:57:51 [DEBUG] GET http://20.124.141.23
  2023/01/25 01:58:21 [ERR] GET http://20.124.141.23 request failed: Get "http://20.124.141.23": dial tcp 20.124.141.23:80: i/o timeout
  2023/01/25 01:58:21 [DEBUG] GET http://20.124.141.23: retrying in 1s (4 left)
  2023/01/25 01:58:52 [ERR] GET http://20.124.141.23 request failed: Get "http://20.124.141.23": dial tcp 20.124.141.23:80: i/o timeout
  2023/01/25 01:58:52 [DEBUG] GET http://20.124.141.23: retrying in 2s (3 left)
  W0125 01:59:30.602672   36990 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
  << Captured StdOut/StdErr Output
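  The [DEBUG]/[ERR] GET lines with "retrying in 1s (4 left)" match the log format of a retrying HTTP client such as hashicorp/go-retryablehttp (an assumption from the format; the suite's actual client is not shown here). A minimal sketch of that retry pattern:

    import retryablehttp "github.com/hashicorp/go-retryablehttp"

    // probe GETs the endpoint, retrying with backoff on dial timeouts;
    // each failed attempt produces [ERR]/[DEBUG] lines like those above.
    func probe(url string) error {
        client := retryablehttp.NewClient()
        client.RetryMax = 5 // "(4 left)" above counts down remaining retries
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        return resp.Body.Close()
    }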

  Timeline >>
  INFO: "" started at Wed, 25 Jan 2023 01:46:28 UTC on Ginkgo node 9 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
... skipping 275 lines ...
------------------------------
• [1370.390 seconds]
Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:906

  Captured StdOut/StdErr Output >>
  2023/01/25 01:46:28 failed trying to get namespace (capz-e2e-yq798v):namespaces "capz-e2e-yq798v" not found
  clusterclass.cluster.x-k8s.io/ci-default created
  kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/ci-default-kubeadm-control-plane created
  azureclustertemplate.infrastructure.cluster.x-k8s.io/ci-default-azure-cluster created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/ci-default-control-plane created
  kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/ci-default-worker created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/ci-default-worker created
... skipping 5 lines ...
  clusterresourceset.addons.cluster.x-k8s.io/csi-proxy created
  configmap/cni-capz-e2e-yq798v-cc-calico-windows created
  configmap/csi-proxy-addon created

  felixconfiguration.crd.projectcalico.org/default created

  Failed to get logs for Machine capz-e2e-yq798v-cc-2dmkx-v56dj, Cluster capz-e2e-yq798v/capz-e2e-yq798v-cc: dialing public load balancer at capz-e2e-yq798v-cc-eb500a1.eastus.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
  Failed to get logs for Machine capz-e2e-yq798v-cc-md-0-zj6fk-f866858cb-h8c6m, Cluster capz-e2e-yq798v/capz-e2e-yq798v-cc: dialing public load balancer at capz-e2e-yq798v-cc-eb500a1.eastus.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
  Failed to get logs for Machine capz-e2e-yq798v-cc-md-win-6p6hs-5ff7b95785-84zrz, Cluster capz-e2e-yq798v/capz-e2e-yq798v-cc: dialing public load balancer at capz-e2e-yq798v-cc-eb500a1.eastus.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
  << Captured StdOut/StdErr Output

  Timeline >>
  INFO: "" started at Wed, 25 Jan 2023 01:46:28 UTC on Ginkgo node 2 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
  STEP: Creating namespace "capz-e2e-yq798v" for hosting the cluster @ 01/25/23 01:46:28.232
  Jan 25 01:46:28.232: INFO: starting to create namespace for hosting the "capz-e2e-yq798v" test spec
... skipping 183 lines ...
  Jan 25 02:03:20.179: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-6tf6k, container azuredisk
  Jan 25 02:03:20.179: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-6tf6k
  Jan 25 02:03:20.228: INFO: Fetching kube-system pod logs took 545.148328ms
  Jan 25 02:03:20.228: INFO: Dumping workload cluster capz-e2e-yq798v/capz-e2e-yq798v-cc Azure activity log
  Jan 25 02:03:20.229: INFO: Creating log watcher for controller tigera-operator/tigera-operator-64db64cb98-n2t58, container tigera-operator
  Jan 25 02:03:20.229: INFO: Collecting events for Pod tigera-operator/tigera-operator-64db64cb98-n2t58
  Jan 25 02:03:20.251: INFO: Error fetching activity logs for cluster capz-e2e-yq798v-cc in namespace capz-e2e-yq798v.  Not able to find the AzureManagedControlPlane on the management cluster: azuremanagedcontrolplanes.infrastructure.cluster.x-k8s.io "capz-e2e-yq798v-cc" not found
  Jan 25 02:03:20.251: INFO: Fetching activity logs took 22.634135ms
  Jan 25 02:03:20.251: INFO: Dumping all the Cluster API resources in the "capz-e2e-yq798v" namespace
  Jan 25 02:03:20.625: INFO: Deleting all clusters in the capz-e2e-yq798v namespace
  STEP: Deleting cluster capz-e2e-yq798v-cc @ 01/25/23 02:03:20.647
  INFO: Waiting for the Cluster capz-e2e-yq798v/capz-e2e-yq798v-cc to be deleted
  STEP: Waiting for cluster capz-e2e-yq798v-cc to be deleted @ 01/25/23 02:03:20.664
... skipping 5 lines ...
  << Timeline
------------------------------
[SynchronizedAfterSuite] PASSED [0.000 seconds]
[SynchronizedAfterSuite] 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:116
------------------------------
• [FAILED] [2595.718 seconds]
Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] [It] with a single control plane node and 1 node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:506

  Captured StdOut/StdErr Output >>
  2023/01/25 01:46:28 failed trying to get namespace (capz-e2e-plb7mq):namespaces "capz-e2e-plb7mq" not found
  cluster.cluster.x-k8s.io/capz-e2e-plb7mq-gpu serverside-applied
  azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-plb7mq-gpu serverside-applied
  kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-plb7mq-gpu-control-plane serverside-applied
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-plb7mq-gpu-control-plane serverside-applied
  azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp serverside-applied
  machinedeployment.cluster.x-k8s.io/capz-e2e-plb7mq-gpu-md-0 serverside-applied
... skipping 109 lines ...
  STEP: Verifying specified VM extensions are created on Azure @ 01/25/23 01:56:07.304
  STEP: Retrieving all machine pools from the machine template spec @ 01/25/23 01:56:08.163
  Jan 25 01:56:08.163: INFO: Listing machine pools in namespace capz-e2e-plb7mq with label cluster.x-k8s.io/cluster-name=capz-e2e-plb7mq-gpu
  STEP: Running a GPU-based calculation @ 01/25/23 01:56:08.167
  STEP: creating a Kubernetes client to the workload cluster @ 01/25/23 01:56:08.167
  STEP: Waiting for a node to have an "nvidia.com/gpu" allocatable resource @ 01/25/23 01:56:08.194
  [FAILED] in [It] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:80 @ 01/25/23 02:21:08.678
  Jan 25 02:21:08.678: INFO: FAILED!
  Jan 25 02:21:08.678: INFO: Cleaning up after "Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node" spec
  STEP: Dumping logs from the "capz-e2e-plb7mq-gpu" workload cluster @ 01/25/23 02:21:08.678
  Jan 25 02:21:08.678: INFO: Dumping workload cluster capz-e2e-plb7mq/capz-e2e-plb7mq-gpu logs
  Jan 25 02:21:08.738: INFO: Collecting logs for Linux node capz-e2e-plb7mq-gpu-control-plane-fzzxp in cluster capz-e2e-plb7mq-gpu in namespace capz-e2e-plb7mq

  Jan 25 02:21:31.673: INFO: Collecting boot logs for AzureMachine capz-e2e-plb7mq-gpu-control-plane-fzzxp
... skipping 74 lines ...
  INFO: Deleting namespace capz-e2e-plb7mq
  Jan 25 02:28:30.647: INFO: Checking if any resources are left over in Azure for spec "create-workload-cluster"
  STEP: Redacting sensitive information from logs @ 01/25/23 02:28:31.183
  INFO: "with a single control plane node and 1 node" started at Wed, 25 Jan 2023 02:29:43 UTC on Ginkgo node 10 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
  << Timeline

  [FAILED] Timed out after 1500.001s.

  Logs for pod gpu-operator-node-feature-discovery-master-77bc558fdc-zd7rh:
  I0125 01:54:02.020848       1 nfd-master.go:170] Node Feature Discovery Master v0.10.1
  I0125 01:54:02.021405       1 nfd-master.go:174] NodeName: "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
  I0125 01:54:02.021418       1 nfd-master.go:185] starting nfd LabelRule controller
  I0125 01:54:02.049822       1 nfd-master.go:226] gRPC server serving on port: 8080
... skipping 267 lines ...
------------------------------
• [3752.726 seconds]
Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:156

  Captured StdOut/StdErr Output >>
  2023/01/25 01:46:28 failed trying to get namespace (capz-e2e-puwq3y):namespaces "capz-e2e-puwq3y" not found
  cluster.cluster.x-k8s.io/capz-e2e-puwq3y-public-custom-vnet created
  azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-puwq3y-public-custom-vnet created
  kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-puwq3y-public-custom-vnet-control-plane created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-puwq3y-public-custom-vnet-control-plane created
  machinedeployment.cluster.x-k8s.io/capz-e2e-puwq3y-public-custom-vnet-md-0 created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-puwq3y-public-custom-vnet-md-0 created
... skipping 247 lines ...
  Jan 25 02:42:28.678: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-7wxp8, container node-driver-registrar
  Jan 25 02:42:28.678: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-ff6zq
  Jan 25 02:42:28.707: INFO: Fetching kube-system pod logs took 528.109391ms
  Jan 25 02:42:28.707: INFO: Dumping workload cluster capz-e2e-puwq3y/capz-e2e-puwq3y-public-custom-vnet Azure activity log
  Jan 25 02:42:28.707: INFO: Creating log watcher for controller tigera-operator/tigera-operator-64db64cb98-pw9gs, container tigera-operator
  Jan 25 02:42:28.707: INFO: Collecting events for Pod tigera-operator/tigera-operator-64db64cb98-pw9gs
  Jan 25 02:42:35.554: INFO: Got error while iterating over activity logs for resource group capz-e2e-puwq3y-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure responding to next results request: StatusCode=404 -- Original Error: autorest/azure: error response cannot be parsed: {"<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Strict//EN\" \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd\">\r\n<html xmlns=\"http://www.w3.org/1999/xhtml\">\r\n<head>\r\n<meta http-equiv=\"Content-Type\" content=\"text/html; charset=iso-8859-1\"/>\r\n<title>404 - File or directory not found.</title>\r\n<style type=\"text/css\">\r\n<!--\r\nbody{margin:0;font-size:.7em;font-family:Verdana, Arial, Helvetica, sans-serif;background:#EEEEEE;}\r\nfieldset{padding:0 15px 10px 15px;} \r\nh1{font-size:2.4em;margin:0;color:#FFF;}\r\nh2{font-si" '\x00' '\x00'} error: invalid character '<' looking for beginning of value
  Jan 25 02:42:35.554: INFO: Fetching activity logs took 6.847563801s
  Jan 25 02:42:35.554: INFO: Dumping all the Cluster API resources in the "capz-e2e-puwq3y" namespace
  Jan 25 02:42:35.922: INFO: Deleting all clusters in the capz-e2e-puwq3y namespace
  STEP: Deleting cluster capz-e2e-puwq3y-public-custom-vnet @ 01/25/23 02:42:35.945
  INFO: Waiting for the Cluster capz-e2e-puwq3y/capz-e2e-puwq3y-public-custom-vnet to be deleted
  STEP: Waiting for cluster capz-e2e-puwq3y-public-custom-vnet to be deleted @ 01/25/23 02:42:35.969
  INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-749ff5bffd-ntb87, container manager: http2: client connection lost
  INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6f7b75f796-ch5ss, container manager: http2: client connection lost
  INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-687b6fd9bc-7n8c7, container manager: http2: client connection lost
  INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-669bd95bbb-vhkls, container manager: http2: client connection lost
  Jan 25 02:45:36.068: INFO: Deleting namespace used for hosting the "create-workload-cluster" test spec
  INFO: Deleting namespace capz-e2e-puwq3y
  Jan 25 02:45:36.087: INFO: Running additional cleanup for the "create-workload-cluster" test spec
  Jan 25 02:45:36.087: INFO: deleting an existing virtual network "custom-vnet"
  Jan 25 02:45:46.877: INFO: deleting an existing route table "node-routetable"
  Jan 25 02:45:49.241: INFO: deleting an existing network security group "node-nsg"
... skipping 16 lines ...
[ReportAfterSuite] PASSED [0.016 seconds]
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
------------------------------

Summarizing 1 Failure:
  [FAIL] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] [It] with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:80

Ran 7 of 27 Specs in 3884.592 seconds
FAIL! -- 6 Passed | 1 Failed | 0 Pending | 20 Skipped

You're using deprecated Ginkgo functionality:
=============================================
  CurrentGinkgoTestDescription() is deprecated in Ginkgo V2.  Use CurrentSpecReport() instead.
  Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:429
... skipping 85 lines ...
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:285
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:429

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=2.7.0
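For the CurrentGinkgoTestDescription deprecation listed above, the V2 migration is mechanical; a sketch under the documented Ginkgo V2 API, not the suite's actual helpers.go change:

  import ginkgo "github.com/onsi/ginkgo/v2"

  func specName() string {
      // Ginkgo V1 (deprecated): ginkgo.CurrentGinkgoTestDescription().FullTestText
      // Ginkgo V2 replacement:
      return ginkgo.CurrentSpecReport().FullText()
  }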

--- FAIL: TestE2E (2726.24s)
FAIL


Ginkgo ran 1 suite in 1h7m7.771356597s

Test Suite Failed
make[1]: *** [Makefile:654: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:663: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...