PR: jackfrancis: Update default k8s version to v1.25 for testing
Result: FAILURE
Tests: 1 failed / 26 succeeded
Started: 2023-01-26 01:37
Elapsed: 56m48s
Revision: aa4b89f70338b5bf172b792cbe9a26a0f73595d6
Refs: 3088

Test Failures


capz-e2e [It] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node (39m35s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sWorkload\scluster\screation\sCreating\sa\sGPU\-enabled\scluster\s\[OPTIONAL\]\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
[FAILED] Timed out after 1500.001s.

Logs for pod gpu-operator-node-feature-discovery-master-77bc558fdc-fxdgv:
I0126 01:52:29.581256       1 nfd-master.go:170] Node Feature Discovery Master v0.10.1
I0126 01:52:29.581322       1 nfd-master.go:174] NodeName: "capz-e2e-28qbte-gpu-control-plane-ktf9t"
I0126 01:52:29.581327       1 nfd-master.go:185] starting nfd LabelRule controller
I0126 01:52:29.827008       1 nfd-master.go:226] gRPC server serving on port: 8080
I0126 01:52:46.854460       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-control-plane-ktf9t"
I0126 01:53:46.914015       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-control-plane-ktf9t"
I0126 01:54:01.567781       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-md-0-5g4qp"
I0126 01:54:46.960016       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-control-plane-ktf9t"
I0126 01:55:01.604215       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-md-0-5g4qp"
I0126 01:55:46.985981       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-control-plane-ktf9t"
I0126 01:56:01.628835       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-md-0-5g4qp"
I0126 01:56:47.011567       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-control-plane-ktf9t"
I0126 01:57:01.652262       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-md-0-5g4qp"
I0126 01:57:47.033509       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-control-plane-ktf9t"
I0126 01:58:01.674804       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-md-0-5g4qp"
I0126 01:58:47.062950       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-control-plane-ktf9t"
I0126 01:59:01.700625       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-md-0-5g4qp"
I0126 01:59:47.087801       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-control-plane-ktf9t"
I0126 02:00:01.727691       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-md-0-5g4qp"
I0126 02:00:47.115294       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-control-plane-ktf9t"
I0126 02:01:01.750549       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-md-0-5g4qp"
I0126 02:01:47.138253       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-control-plane-ktf9t"
I0126 02:02:01.774538       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-md-0-5g4qp"
I0126 02:02:47.164054       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-control-plane-ktf9t"
I0126 02:03:01.798346       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-md-0-5g4qp"
I0126 02:03:47.188083       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-control-plane-ktf9t"
I0126 02:04:01.820465       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-md-0-5g4qp"
I0126 02:04:47.212547       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-control-plane-ktf9t"
I0126 02:05:01.844581       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-md-0-5g4qp"
I0126 02:05:47.233843       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-control-plane-ktf9t"
I0126 02:06:01.866999       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-md-0-5g4qp"
I0126 02:06:47.259195       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-control-plane-ktf9t"
I0126 02:07:01.889723       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-md-0-5g4qp"
I0126 02:07:47.284755       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-control-plane-ktf9t"
I0126 02:08:01.915054       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-md-0-5g4qp"
I0126 02:08:47.309685       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-control-plane-ktf9t"
I0126 02:09:01.937387       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-md-0-5g4qp"
I0126 02:09:47.337318       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-control-plane-ktf9t"
I0126 02:10:01.961652       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-md-0-5g4qp"
I0126 02:10:47.363329       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-control-plane-ktf9t"
I0126 02:11:01.985530       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-md-0-5g4qp"
I0126 02:11:47.386931       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-control-plane-ktf9t"
I0126 02:12:02.009540       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-md-0-5g4qp"
I0126 02:12:47.409970       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-control-plane-ktf9t"
I0126 02:13:02.031852       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-md-0-5g4qp"
I0126 02:13:47.434985       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-control-plane-ktf9t"
I0126 02:14:02.055413       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-md-0-5g4qp"
I0126 02:14:47.466234       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-control-plane-ktf9t"
I0126 02:15:02.079197       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-md-0-5g4qp"
I0126 02:15:47.495934       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-control-plane-ktf9t"
I0126 02:16:02.103334       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-md-0-5g4qp"
I0126 02:16:47.517887       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-control-plane-ktf9t"
I0126 02:17:02.125295       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-md-0-5g4qp"
I0126 02:17:47.544390       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-control-plane-ktf9t"
I0126 02:18:02.151313       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-md-0-5g4qp"
I0126 02:18:47.567800       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-control-plane-ktf9t"
I0126 02:19:02.174398       1 nfd-master.go:423] received labeling request for node "capz-e2e-28qbte-gpu-md-0-5g4qp"

Logs for pod gpu-operator-node-feature-discovery-worker-7cppp:
I0126 01:54:01.537461       1 nfd-worker.go:155] Node Feature Discovery Worker v0.10.1
I0126 01:54:01.537540       1 nfd-worker.go:156] NodeName: 'capz-e2e-28qbte-gpu-md-0-5g4qp'
I0126 01:54:01.538048       1 nfd-worker.go:423] configuration file "/etc/kubernetes/node-feature-discovery/nfd-worker.conf" parsed
I0126 01:54:01.538130       1 nfd-worker.go:461] worker (re-)configuration successfully completed
I0126 01:54:01.538172       1 base.go:126] connecting to nfd-master at gpu-operator-node-feature-discovery-master:8080 ...
I0126 01:54:01.538272       1 component.go:36] [core]parsed scheme: ""
I0126 01:54:01.538287       1 component.go:36] [core]scheme "" not registered, fallback to default scheme
I0126 01:54:01.538321       1 component.go:36] [core]ccResolverWrapper: sending update to cc: {[{gpu-operator-node-feature-discovery-master:8080  <nil> 0 <nil>}] <nil> <nil>}
I0126 01:54:01.538348       1 component.go:36] [core]ClientConn switching balancer to "pick_first"
I0126 01:54:01.538352       1 component.go:36] [core]Channel switches to new LB policy "pick_first"
I0126 01:54:01.538378       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
I0126 01:54:01.538409       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
I0126 01:54:01.538645       1 component.go:36] [core]Channel Connectivity change to CONNECTING
I0126 01:54:01.544447       1 component.go:36] [core]Subchannel Connectivity change to READY
I0126 01:54:01.544467       1 component.go:36] [core]Channel Connectivity change to READY
I0126 01:54:01.553478       1 nfd-worker.go:472] starting feature discovery...
I0126 01:54:01.553602       1 nfd-worker.go:484] feature discovery completed
I0126 01:54:01.553615       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 01:55:01.592838       1 nfd-worker.go:472] starting feature discovery...
I0126 01:55:01.592951       1 nfd-worker.go:484] feature discovery completed
I0126 01:55:01.592964       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 01:56:01.618285       1 nfd-worker.go:472] starting feature discovery...
I0126 01:56:01.618396       1 nfd-worker.go:484] feature discovery completed
I0126 01:56:01.618408       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 01:57:01.642374       1 nfd-worker.go:472] starting feature discovery...
I0126 01:57:01.642485       1 nfd-worker.go:484] feature discovery completed
I0126 01:57:01.642497       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 01:58:01.665490       1 nfd-worker.go:472] starting feature discovery...
I0126 01:58:01.665607       1 nfd-worker.go:484] feature discovery completed
I0126 01:58:01.665620       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 01:59:01.688559       1 nfd-worker.go:472] starting feature discovery...
I0126 01:59:01.688669       1 nfd-worker.go:484] feature discovery completed
I0126 01:59:01.688682       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:00:01.716551       1 nfd-worker.go:472] starting feature discovery...
I0126 02:00:01.716667       1 nfd-worker.go:484] feature discovery completed
I0126 02:00:01.716679       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:01:01.740134       1 nfd-worker.go:472] starting feature discovery...
I0126 02:01:01.740262       1 nfd-worker.go:484] feature discovery completed
I0126 02:01:01.740275       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:02:01.764056       1 nfd-worker.go:472] starting feature discovery...
I0126 02:02:01.764181       1 nfd-worker.go:484] feature discovery completed
I0126 02:02:01.764195       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:03:01.788999       1 nfd-worker.go:472] starting feature discovery...
I0126 02:03:01.789109       1 nfd-worker.go:484] feature discovery completed
I0126 02:03:01.789122       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:04:01.810655       1 nfd-worker.go:472] starting feature discovery...
I0126 02:04:01.810780       1 nfd-worker.go:484] feature discovery completed
I0126 02:04:01.810793       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:05:01.834617       1 nfd-worker.go:472] starting feature discovery...
I0126 02:05:01.834880       1 nfd-worker.go:484] feature discovery completed
I0126 02:05:01.834896       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:06:01.856845       1 nfd-worker.go:472] starting feature discovery...
I0126 02:06:01.857059       1 nfd-worker.go:484] feature discovery completed
I0126 02:06:01.857076       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:07:01.879591       1 nfd-worker.go:472] starting feature discovery...
I0126 02:07:01.879704       1 nfd-worker.go:484] feature discovery completed
I0126 02:07:01.879717       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:08:01.904562       1 nfd-worker.go:472] starting feature discovery...
I0126 02:08:01.904673       1 nfd-worker.go:484] feature discovery completed
I0126 02:08:01.904685       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:09:01.927498       1 nfd-worker.go:472] starting feature discovery...
I0126 02:09:01.927610       1 nfd-worker.go:484] feature discovery completed
I0126 02:09:01.927622       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:10:01.951318       1 nfd-worker.go:472] starting feature discovery...
I0126 02:10:01.951434       1 nfd-worker.go:484] feature discovery completed
I0126 02:10:01.951446       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:11:01.975127       1 nfd-worker.go:472] starting feature discovery...
I0126 02:11:01.975238       1 nfd-worker.go:484] feature discovery completed
I0126 02:11:01.975250       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:12:01.999285       1 nfd-worker.go:472] starting feature discovery...
I0126 02:12:01.999404       1 nfd-worker.go:484] feature discovery completed
I0126 02:12:01.999417       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:13:02.022168       1 nfd-worker.go:472] starting feature discovery...
I0126 02:13:02.022282       1 nfd-worker.go:484] feature discovery completed
I0126 02:13:02.022295       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:14:02.045900       1 nfd-worker.go:472] starting feature discovery...
I0126 02:14:02.046041       1 nfd-worker.go:484] feature discovery completed
I0126 02:14:02.046054       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:15:02.068466       1 nfd-worker.go:472] starting feature discovery...
I0126 02:15:02.068589       1 nfd-worker.go:484] feature discovery completed
I0126 02:15:02.068602       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:16:02.093861       1 nfd-worker.go:472] starting feature discovery...
I0126 02:16:02.093974       1 nfd-worker.go:484] feature discovery completed
I0126 02:16:02.093987       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:17:02.116020       1 nfd-worker.go:472] starting feature discovery...
I0126 02:17:02.116136       1 nfd-worker.go:484] feature discovery completed
I0126 02:17:02.116162       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:18:02.141618       1 nfd-worker.go:472] starting feature discovery...
I0126 02:18:02.141760       1 nfd-worker.go:484] feature discovery completed
I0126 02:18:02.141774       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:19:02.164280       1 nfd-worker.go:472] starting feature discovery...
I0126 02:19:02.164395       1 nfd-worker.go:484] feature discovery completed
I0126 02:19:02.164407       1 nfd-worker.go:565] sending labeling request to nfd-master

Logs for pod gpu-operator-node-feature-discovery-worker-nk8m6:
I0126 01:52:29.909664       1 nfd-worker.go:155] Node Feature Discovery Worker v0.10.1
I0126 01:52:29.909895       1 nfd-worker.go:156] NodeName: 'capz-e2e-28qbte-gpu-control-plane-ktf9t'
I0126 01:52:29.912383       1 nfd-worker.go:423] configuration file "/etc/kubernetes/node-feature-discovery/nfd-worker.conf" parsed
I0126 01:52:29.912697       1 nfd-worker.go:461] worker (re-)configuration successfully completed
I0126 01:52:29.913060       1 base.go:126] connecting to nfd-master at gpu-operator-node-feature-discovery-master:8080 ...
I0126 01:52:29.913214       1 component.go:36] [core]parsed scheme: ""
I0126 01:52:29.913223       1 component.go:36] [core]scheme "" not registered, fallback to default scheme
I0126 01:52:29.913250       1 component.go:36] [core]ccResolverWrapper: sending update to cc: {[{gpu-operator-node-feature-discovery-master:8080  <nil> 0 <nil>}] <nil> <nil>}
I0126 01:52:29.913372       1 component.go:36] [core]ClientConn switching balancer to "pick_first"
I0126 01:52:29.914001       1 component.go:36] [core]Channel switches to new LB policy "pick_first"
I0126 01:52:29.914143       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
I0126 01:52:29.914259       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
I0126 01:52:29.914343       1 component.go:36] [core]Channel Connectivity change to CONNECTING
W0126 01:52:29.929894       1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 10.105.236.51:8080: connect: connection refused". Reconnecting...
I0126 01:52:29.930055       1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
I0126 01:52:29.930143       1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
I0126 01:52:30.930343       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
I0126 01:52:30.930452       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
I0126 01:52:30.930535       1 component.go:36] [core]Channel Connectivity change to CONNECTING
W0126 01:52:30.931750       1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 10.105.236.51:8080: connect: connection refused". Reconnecting...
I0126 01:52:30.931768       1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
I0126 01:52:30.931778       1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
I0126 01:52:32.815656       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
I0126 01:52:32.815755       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
I0126 01:52:32.816828       1 component.go:36] [core]Channel Connectivity change to CONNECTING
W0126 01:52:32.824073       1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 10.105.236.51:8080: connect: connection refused". Reconnecting...
I0126 01:52:32.824146       1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
I0126 01:52:32.824180       1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
I0126 01:52:34.881846       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
I0126 01:52:34.881939       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
I0126 01:52:34.882084       1 component.go:36] [core]Channel Connectivity change to CONNECTING
W0126 01:52:34.882793       1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 10.105.236.51:8080: connect: connection refused". Reconnecting...
I0126 01:52:34.882812       1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
I0126 01:52:34.882909       1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
I0126 01:52:39.688568       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
I0126 01:52:39.688593       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
I0126 01:52:39.688672       1 component.go:36] [core]Channel Connectivity change to CONNECTING
W0126 01:52:39.689409       1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 10.105.236.51:8080: connect: connection refused". Reconnecting...
I0126 01:52:39.689423       1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
I0126 01:52:39.689433       1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
I0126 01:52:46.832729       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
I0126 01:52:46.832755       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
I0126 01:52:46.833009       1 component.go:36] [core]Channel Connectivity change to CONNECTING
I0126 01:52:46.834131       1 component.go:36] [core]Subchannel Connectivity change to READY
I0126 01:52:46.834149       1 component.go:36] [core]Channel Connectivity change to READY
I0126 01:52:46.843284       1 nfd-worker.go:472] starting feature discovery...
I0126 01:52:46.843460       1 nfd-worker.go:484] feature discovery completed
I0126 01:52:46.843472       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 01:53:46.901464       1 nfd-worker.go:472] starting feature discovery...
I0126 01:53:46.901741       1 nfd-worker.go:484] feature discovery completed
I0126 01:53:46.901757       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 01:54:46.938606       1 nfd-worker.go:472] starting feature discovery...
I0126 01:54:46.939299       1 nfd-worker.go:484] feature discovery completed
I0126 01:54:46.939443       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 01:55:46.975136       1 nfd-worker.go:472] starting feature discovery...
I0126 01:55:46.975294       1 nfd-worker.go:484] feature discovery completed
I0126 01:55:46.975326       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 01:56:47.001591       1 nfd-worker.go:472] starting feature discovery...
I0126 01:56:47.001827       1 nfd-worker.go:484] feature discovery completed
I0126 01:56:47.001843       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 01:57:47.023516       1 nfd-worker.go:472] starting feature discovery...
I0126 01:57:47.023962       1 nfd-worker.go:484] feature discovery completed
I0126 01:57:47.023980       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 01:58:47.050213       1 nfd-worker.go:472] starting feature discovery...
I0126 01:58:47.050413       1 nfd-worker.go:484] feature discovery completed
I0126 01:58:47.050455       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 01:59:47.078351       1 nfd-worker.go:472] starting feature discovery...
I0126 01:59:47.078634       1 nfd-worker.go:484] feature discovery completed
I0126 01:59:47.078648       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:00:47.105301       1 nfd-worker.go:472] starting feature discovery...
I0126 02:00:47.105558       1 nfd-worker.go:484] feature discovery completed
I0126 02:00:47.105572       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:01:47.129350       1 nfd-worker.go:472] starting feature discovery...
I0126 02:01:47.129486       1 nfd-worker.go:484] feature discovery completed
I0126 02:01:47.129499       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:02:47.154268       1 nfd-worker.go:472] starting feature discovery...
I0126 02:02:47.154415       1 nfd-worker.go:484] feature discovery completed
I0126 02:02:47.154424       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:03:47.176366       1 nfd-worker.go:472] starting feature discovery...
I0126 02:03:47.176583       1 nfd-worker.go:484] feature discovery completed
I0126 02:03:47.176597       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:04:47.201740       1 nfd-worker.go:472] starting feature discovery...
I0126 02:04:47.202305       1 nfd-worker.go:484] feature discovery completed
I0126 02:04:47.202321       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:05:47.223796       1 nfd-worker.go:472] starting feature discovery...
I0126 02:05:47.223927       1 nfd-worker.go:484] feature discovery completed
I0126 02:05:47.223941       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:06:47.250254       1 nfd-worker.go:472] starting feature discovery...
I0126 02:06:47.250381       1 nfd-worker.go:484] feature discovery completed
I0126 02:06:47.250395       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:07:47.273634       1 nfd-worker.go:472] starting feature discovery...
I0126 02:07:47.273911       1 nfd-worker.go:484] feature discovery completed
I0126 02:07:47.273925       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:08:47.298350       1 nfd-worker.go:472] starting feature discovery...
I0126 02:08:47.298644       1 nfd-worker.go:484] feature discovery completed
I0126 02:08:47.298657       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:09:47.326079       1 nfd-worker.go:472] starting feature discovery...
I0126 02:09:47.326522       1 nfd-worker.go:484] feature discovery completed
I0126 02:09:47.326683       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:10:47.352878       1 nfd-worker.go:472] starting feature discovery...
I0126 02:10:47.353038       1 nfd-worker.go:484] feature discovery completed
I0126 02:10:47.353050       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:11:47.377498       1 nfd-worker.go:472] starting feature discovery...
I0126 02:11:47.377628       1 nfd-worker.go:484] feature discovery completed
I0126 02:11:47.377644       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:12:47.399507       1 nfd-worker.go:472] starting feature discovery...
I0126 02:12:47.399791       1 nfd-worker.go:484] feature discovery completed
I0126 02:12:47.399809       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:13:47.423170       1 nfd-worker.go:472] starting feature discovery...
I0126 02:13:47.423552       1 nfd-worker.go:484] feature discovery completed
I0126 02:13:47.423664       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:14:47.456496       1 nfd-worker.go:472] starting feature discovery...
I0126 02:14:47.456756       1 nfd-worker.go:484] feature discovery completed
I0126 02:14:47.456770       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:15:47.480556       1 nfd-worker.go:472] starting feature discovery...
I0126 02:15:47.480919       1 nfd-worker.go:484] feature discovery completed
I0126 02:15:47.481005       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:16:47.509109       1 nfd-worker.go:472] starting feature discovery...
I0126 02:16:47.509264       1 nfd-worker.go:484] feature discovery completed
I0126 02:16:47.509278       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:17:47.533171       1 nfd-worker.go:472] starting feature discovery...
I0126 02:17:47.533459       1 nfd-worker.go:484] feature discovery completed
I0126 02:17:47.533474       1 nfd-worker.go:565] sending labeling request to nfd-master
I0126 02:18:47.558683       1 nfd-worker.go:472] starting feature discovery...
I0126 02:18:47.558966       1 nfd-worker.go:484] feature discovery completed
I0126 02:18:47.558982       1 nfd-worker.go:565] sending labeling request to nfd-master

Expected
    <bool>: false
to be true
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:80 @ 01/26/23 02:19:20.789
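"Expected <bool>: false to be true" is the shape Gomega prints when an Eventually poll on a boolean condition never succeeds before its deadline; per the Timeline further down, the spec was waiting for a node to report an "nvidia.com/gpu" allocatable resource. Below is a minimal sketch of that kind of check, not the actual azure_gpu.go code: the helper name, clientset wiring, and poll interval are illustrative assumptions, and the 25-minute deadline matches the 1500 s timeout above.

package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForGPUNode polls the workload cluster until some node advertises a
// non-zero "nvidia.com/gpu" allocatable quantity; if the deadline passes
// first, Gomega fails the spec with "Expected <bool>: false to be true".
func waitForGPUNode(ctx context.Context, clientset kubernetes.Interface) {
	Eventually(func() bool {
		nodes, err := clientset.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return false // keep polling through transient API errors
		}
		for _, node := range nodes.Items {
			if gpu, ok := node.Status.Allocatable["nvidia.com/gpu"]; ok && !gpu.IsZero() {
				return true
			}
		}
		return false
	}, 25*time.Minute, 10*time.Second).Should(BeTrue())
}

In this run the nfd-master and nfd-worker logs above show feature discovery and labeling proceeding normally on both nodes, so the condition itself (a non-zero GPU allocatable) is what never became true within the deadline.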

				



Error lines from build-log.txt

... skipping 625 lines ...
------------------------------
• [822.291 seconds]
Workload cluster creation Creating a Flatcar cluster [OPTIONAL] With Flatcar control-plane and worker nodes
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:321

  Captured StdOut/StdErr Output >>
  2023/01/26 01:46:32 failed trying to get namespace (capz-e2e-iahf9b):namespaces "capz-e2e-iahf9b" not found
  cluster.cluster.x-k8s.io/capz-e2e-iahf9b-flatcar created
  azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-iahf9b-flatcar created
  kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-iahf9b-flatcar-control-plane created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-iahf9b-flatcar-control-plane created
  machinedeployment.cluster.x-k8s.io/capz-e2e-iahf9b-flatcar-md-0 created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-iahf9b-flatcar-md-0 created
  kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-iahf9b-flatcar-md-0 created
  azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created

  felixconfiguration.crd.projectcalico.org/default configured

  Failed to get logs for Machine capz-e2e-iahf9b-flatcar-control-plane-whdjx, Cluster capz-e2e-iahf9b/capz-e2e-iahf9b-flatcar: [dialing public load balancer at capz-e2e-iahf9b-flatcar-e43b9c8d.westus3.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.117.73:59394->20.14.25.243:22: read: connection reset by peer, dialing public load balancer at capz-e2e-iahf9b-flatcar-e43b9c8d.westus3.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.117.73:59414->20.14.25.243:22: read: connection reset by peer]
  Failed to get logs for Machine capz-e2e-iahf9b-flatcar-md-0-55655d585b-wn7kc, Cluster capz-e2e-iahf9b/capz-e2e-iahf9b-flatcar: [dialing public load balancer at capz-e2e-iahf9b-flatcar-e43b9c8d.westus3.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.117.73:57478->20.14.25.243:22: read: connection reset by peer, dialing public load balancer at capz-e2e-iahf9b-flatcar-e43b9c8d.westus3.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.117.73:57464->20.14.25.243:22: read: connection reset by peer, dialing public load balancer at capz-e2e-iahf9b-flatcar-e43b9c8d.westus3.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.117.73:57466->20.14.25.243:22: read: connection reset by peer, dialing public load balancer at capz-e2e-iahf9b-flatcar-e43b9c8d.westus3.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.117.73:57472->20.14.25.243:22: read: connection reset by peer, dialing public load balancer at capz-e2e-iahf9b-flatcar-e43b9c8d.westus3.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.117.73:57462->20.14.25.243:22: read: connection reset by peer, dialing public load balancer at capz-e2e-iahf9b-flatcar-e43b9c8d.westus3.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.117.73:57468->20.14.25.243:22: read: connection reset by peer, dialing public load balancer at capz-e2e-iahf9b-flatcar-e43b9c8d.westus3.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.117.73:57474->20.14.25.243:22: read: connection reset by peer, dialing public load balancer at capz-e2e-iahf9b-flatcar-e43b9c8d.westus3.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.117.73:57482->20.14.25.243:22: read: connection reset by peer, dialing public load balancer at capz-e2e-iahf9b-flatcar-e43b9c8d.westus3.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.117.73:57480->20.14.25.243:22: read: connection reset by peer]
  << Captured StdOut/StdErr Output

  Timeline >>
  INFO: "" started at Thu, 26 Jan 2023 01:46:32 UTC on Ginkgo node 8 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
  STEP: Creating namespace "capz-e2e-iahf9b" for hosting the cluster @ 01/26/23 01:46:32.927
  Jan 26 01:46:32.927: INFO: starting to create namespace for hosting the "capz-e2e-iahf9b" test spec
... skipping 157 lines ...
------------------------------
• [1011.204 seconds]
Workload cluster creation Creating a cluster that uses the external cloud provider and machinepools [OPTIONAL] with 1 control plane node and 1 machinepool
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:573

  Captured StdOut/StdErr Output >>
  2023/01/26 01:46:32 failed trying to get namespace (capz-e2e-4i6ydp):namespaces "capz-e2e-4i6ydp" not found
  cluster.cluster.x-k8s.io/capz-e2e-4i6ydp-flex created
  azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-4i6ydp-flex created
  kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-4i6ydp-flex-control-plane created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-4i6ydp-flex-control-plane created
  machinepool.cluster.x-k8s.io/capz-e2e-4i6ydp-flex-mp-0 created
  azuremachinepool.infrastructure.cluster.x-k8s.io/capz-e2e-4i6ydp-flex-mp-0 created
... skipping 2 lines ...

  felixconfiguration.crd.projectcalico.org/default created

  W0126 01:56:19.980079   36945 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
  2023/01/26 01:56:50 [DEBUG] GET http://20.14.18.78
  W0126 01:57:40.061577   36945 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
  Failed to get logs for MachinePool capz-e2e-4i6ydp-flex-mp-0, Cluster capz-e2e-4i6ydp/capz-e2e-4i6ydp-flex: Unable to collect VMSS Boot Diagnostic logs: failed to parse resource id: parsing failed for /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-4i6ydp-flex/providers/Microsoft.Compute. Invalid resource Id format
  << Captured StdOut/StdErr Output
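The warnings.go:70 lines in the captured output above are client-go reporting that deleting a Job orphans its child pods unless a deletion propagation policy is set. A minimal client-go sketch of the deletion the warning suggests, with Background propagation so the garbage collector also removes the child pods (the function and parameter names here are illustrative, not the test suite's own helper):

package e2e

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteJobWithPods deletes a Job with Background propagation so its child
// pods are garbage-collected too, which avoids the
// "child pods are preserved by default" warning seen above.
func deleteJobWithPods(ctx context.Context, clientset kubernetes.Interface, namespace, name string) error {
	policy := metav1.DeletePropagationBackground
	return clientset.BatchV1().Jobs(namespace).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}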

  Timeline >>
  INFO: "" started at Thu, 26 Jan 2023 01:46:32 UTC on Ginkgo node 2 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
  STEP: Creating namespace "capz-e2e-4i6ydp" for hosting the cluster @ 01/26/23 01:46:32.928
  Jan 26 01:46:32.928: INFO: starting to create namespace for hosting the "capz-e2e-4i6ydp" test spec
... skipping 229 lines ...
------------------------------
• [1053.335 seconds]
Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:637

  Captured StdOut/StdErr Output >>
  2023/01/26 01:46:32 failed trying to get namespace (capz-e2e-ga618t):namespaces "capz-e2e-ga618t" not found
  cluster.cluster.x-k8s.io/capz-e2e-ga618t-oot created
  azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-ga618t-oot created
  kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-ga618t-oot-control-plane created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-ga618t-oot-control-plane created
  machinedeployment.cluster.x-k8s.io/capz-e2e-ga618t-oot-md-0 created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-ga618t-oot-md-0 created
  kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-ga618t-oot-md-0 created
  azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created

  felixconfiguration.crd.projectcalico.org/default configured

  W0126 01:55:44.984600   37013 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
  2023/01/26 01:56:45 [DEBUG] GET http://20.14.28.248
  2023/01/26 01:57:15 [ERR] GET http://20.14.28.248 request failed: Get "http://20.14.28.248": dial tcp 20.14.28.248:80: i/o timeout
  2023/01/26 01:57:15 [DEBUG] GET http://20.14.28.248: retrying in 1s (4 left)
  W0126 01:57:53.945025   37013 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
  << Captured StdOut/StdErr Output

  Timeline >>
  INFO: "" started at Thu, 26 Jan 2023 01:46:32 UTC on Ginkgo node 10 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
... skipping 274 lines ...
------------------------------
• [1293.439 seconds]
Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:830

  Captured StdOut/StdErr Output >>
  2023/01/26 01:46:32 failed trying to get namespace (capz-e2e-48li4a):namespaces "capz-e2e-48li4a" not found
  cluster.cluster.x-k8s.io/capz-e2e-48li4a-dual-stack created
  azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-48li4a-dual-stack created
  kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-48li4a-dual-stack-control-plane created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-48li4a-dual-stack-control-plane created
  azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
  machinedeployment.cluster.x-k8s.io/capz-e2e-48li4a-dual-stack-md-0 created
... skipping 331 lines ...
------------------------------
• [2058.020 seconds]
Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:906

  Captured StdOut/StdErr Output >>
  2023/01/26 01:46:32 failed trying to get namespace (capz-e2e-eebfp0):namespaces "capz-e2e-eebfp0" not found
  clusterclass.cluster.x-k8s.io/ci-default created
  kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/ci-default-kubeadm-control-plane created
  azureclustertemplate.infrastructure.cluster.x-k8s.io/ci-default-azure-cluster created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/ci-default-control-plane created
  kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/ci-default-worker created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/ci-default-worker created
... skipping 5 lines ...
  clusterresourceset.addons.cluster.x-k8s.io/csi-proxy created
  configmap/cni-capz-e2e-eebfp0-cc-calico-windows created
  configmap/csi-proxy-addon created

  felixconfiguration.crd.projectcalico.org/default configured

  Failed to get logs for Machine capz-e2e-eebfp0-cc-md-0-ntxld-667578b7d8-9jq7p, Cluster capz-e2e-eebfp0/capz-e2e-eebfp0-cc: dialing public load balancer at capz-e2e-eebfp0-cc-d0de0e3d.westus3.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
  Failed to get logs for Machine capz-e2e-eebfp0-cc-md-win-cmrcg-777dfd8cff-wlcgq, Cluster capz-e2e-eebfp0/capz-e2e-eebfp0-cc: dialing public load balancer at capz-e2e-eebfp0-cc-d0de0e3d.westus3.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
  Failed to get logs for Machine capz-e2e-eebfp0-cc-qfs4j-x9l4f, Cluster capz-e2e-eebfp0/capz-e2e-eebfp0-cc: dialing public load balancer at capz-e2e-eebfp0-cc-d0de0e3d.westus3.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
  << Captured StdOut/StdErr Output

  Timeline >>
  INFO: "" started at Thu, 26 Jan 2023 01:46:32 UTC on Ginkgo node 5 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
  STEP: Creating namespace "capz-e2e-eebfp0" for hosting the cluster @ 01/26/23 01:46:32.929
  Jan 26 01:46:32.929: INFO: starting to create namespace for hosting the "capz-e2e-eebfp0" test spec
... skipping 189 lines ...
  Jan 26 02:13:36.293: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-hcgc6, container node-driver-registrar
  Jan 26 02:13:36.293: INFO: Collecting events for Pod kube-system/kube-proxy-windows-92xl2
  Jan 26 02:13:36.417: INFO: Fetching kube-system pod logs took 1.039210284s
  Jan 26 02:13:36.417: INFO: Dumping workload cluster capz-e2e-eebfp0/capz-e2e-eebfp0-cc Azure activity log
  Jan 26 02:13:36.418: INFO: Creating log watcher for controller tigera-operator/tigera-operator-64db64cb98-p2pjm, container tigera-operator
  Jan 26 02:13:36.418: INFO: Collecting events for Pod tigera-operator/tigera-operator-64db64cb98-p2pjm
  Jan 26 02:13:36.446: INFO: Error fetching activity logs for cluster capz-e2e-eebfp0-cc in namespace capz-e2e-eebfp0.  Not able to find the AzureManagedControlPlane on the management cluster: azuremanagedcontrolplanes.infrastructure.cluster.x-k8s.io "capz-e2e-eebfp0-cc" not found
  Jan 26 02:13:36.446: INFO: Fetching activity logs took 28.754277ms
  Jan 26 02:13:36.446: INFO: Dumping all the Cluster API resources in the "capz-e2e-eebfp0" namespace
  Jan 26 02:13:36.871: INFO: Deleting all clusters in the capz-e2e-eebfp0 namespace
  STEP: Deleting cluster capz-e2e-eebfp0-cc @ 01/26/23 02:13:36.899
  INFO: Waiting for the Cluster capz-e2e-eebfp0/capz-e2e-eebfp0-cc to be deleted
  STEP: Waiting for cluster capz-e2e-eebfp0-cc to be deleted @ 01/26/23 02:13:36.923
... skipping 5 lines ...
  << Timeline
------------------------------
[SynchronizedAfterSuite] PASSED [0.000 seconds]
[SynchronizedAfterSuite] 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:116
------------------------------
• [FAILED] [2375.626 seconds]
Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] [It] with a single control plane node and 1 node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:506

  Captured StdOut/StdErr Output >>
  2023/01/26 01:46:32 failed trying to get namespace (capz-e2e-28qbte):namespaces "capz-e2e-28qbte" not found
  cluster.cluster.x-k8s.io/capz-e2e-28qbte-gpu serverside-applied
  azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-28qbte-gpu serverside-applied
  kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-28qbte-gpu-control-plane serverside-applied
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-28qbte-gpu-control-plane serverside-applied
  azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp serverside-applied
  machinedeployment.cluster.x-k8s.io/capz-e2e-28qbte-gpu-md-0 serverside-applied
... skipping 114 lines ...
  STEP: Verifying specified VM extensions are created on Azure @ 01/26/23 01:54:19.084
  STEP: Retrieving all machine pools from the machine template spec @ 01/26/23 01:54:20.303
  Jan 26 01:54:20.303: INFO: Listing machine pools in namespace capz-e2e-28qbte with label cluster.x-k8s.io/cluster-name=capz-e2e-28qbte-gpu
  STEP: Running a GPU-based calculation @ 01/26/23 01:54:20.307
  STEP: creating a Kubernetes client to the workload cluster @ 01/26/23 01:54:20.307
  STEP: Waiting for a node to have an "nvidia.com/gpu" allocatable resource @ 01/26/23 01:54:20.328
  [FAILED] in [It] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:80 @ 01/26/23 02:19:20.789
  Jan 26 02:19:20.789: INFO: FAILED!
  Jan 26 02:19:20.789: INFO: Cleaning up after "Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node" spec
  STEP: Dumping logs from the "capz-e2e-28qbte-gpu" workload cluster @ 01/26/23 02:19:20.789
  Jan 26 02:19:20.789: INFO: Dumping workload cluster capz-e2e-28qbte/capz-e2e-28qbte-gpu logs
  Jan 26 02:19:20.837: INFO: Collecting logs for Linux node capz-e2e-28qbte-gpu-control-plane-ktf9t in cluster capz-e2e-28qbte-gpu in namespace capz-e2e-28qbte

  Jan 26 02:19:39.818: INFO: Collecting boot logs for AzureMachine capz-e2e-28qbte-gpu-control-plane-ktf9t
... skipping 74 lines ...
  INFO: Deleting namespace capz-e2e-28qbte
  Jan 26 02:24:38.564: INFO: Checking if any resources are left over in Azure for spec "create-workload-cluster"
  STEP: Redacting sensitive information from logs @ 01/26/23 02:24:40.231
  INFO: "with a single control plane node and 1 node" started at Thu, 26 Jan 2023 02:26:08 UTC on Ginkgo node 7 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
  << Timeline

  [FAILED] Timed out after 1500.001s.

  Logs for pod gpu-operator-node-feature-discovery-master-77bc558fdc-fxdgv:
  I0126 01:52:29.581256       1 nfd-master.go:170] Node Feature Discovery Master v0.10.1
  I0126 01:52:29.581322       1 nfd-master.go:174] NodeName: "capz-e2e-28qbte-gpu-control-plane-ktf9t"
  I0126 01:52:29.581327       1 nfd-master.go:185] starting nfd LabelRule controller
  I0126 01:52:29.827008       1 nfd-master.go:226] gRPC server serving on port: 8080
... skipping 157 lines ...
  I0126 01:52:29.913250       1 component.go:36] [core]ccResolverWrapper: sending update to cc: {[{gpu-operator-node-feature-discovery-master:8080  <nil> 0 <nil>}] <nil> <nil>}
  I0126 01:52:29.913372       1 component.go:36] [core]ClientConn switching balancer to "pick_first"
  I0126 01:52:29.914001       1 component.go:36] [core]Channel switches to new LB policy "pick_first"
  I0126 01:52:29.914143       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
  I0126 01:52:29.914259       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
  I0126 01:52:29.914343       1 component.go:36] [core]Channel Connectivity change to CONNECTING
  W0126 01:52:29.929894       1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 10.105.236.51:8080: connect: connection refused". Reconnecting...
  I0126 01:52:29.930055       1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
  I0126 01:52:29.930143       1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
  I0126 01:52:30.930343       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
  I0126 01:52:30.930452       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
  I0126 01:52:30.930535       1 component.go:36] [core]Channel Connectivity change to CONNECTING
  W0126 01:52:30.931750       1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 10.105.236.51:8080: connect: connection refused". Reconnecting...
  I0126 01:52:30.931768       1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
  I0126 01:52:30.931778       1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
  I0126 01:52:32.815656       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
  I0126 01:52:32.815755       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
  I0126 01:52:32.816828       1 component.go:36] [core]Channel Connectivity change to CONNECTING
  W0126 01:52:32.824073       1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 10.105.236.51:8080: connect: connection refused". Reconnecting...
  I0126 01:52:32.824146       1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
  I0126 01:52:32.824180       1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
  I0126 01:52:34.881846       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
  I0126 01:52:34.881939       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
  I0126 01:52:34.882084       1 component.go:36] [core]Channel Connectivity change to CONNECTING
  W0126 01:52:34.882793       1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 10.105.236.51:8080: connect: connection refused". Reconnecting...
  I0126 01:52:34.882812       1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
  I0126 01:52:34.882909       1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
  I0126 01:52:39.688568       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
  I0126 01:52:39.688593       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
  I0126 01:52:39.688672       1 component.go:36] [core]Channel Connectivity change to CONNECTING
  W0126 01:52:39.689409       1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 10.105.236.51:8080: connect: connection refused". Reconnecting...
  I0126 01:52:39.689423       1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
  I0126 01:52:39.689433       1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
  I0126 01:52:46.832729       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
  I0126 01:52:46.832755       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
  I0126 01:52:46.833009       1 component.go:36] [core]Channel Connectivity change to CONNECTING
  I0126 01:52:46.834131       1 component.go:36] [core]Subchannel Connectivity change to READY
... skipping 99 lines ...
------------------------------
• [2751.726 seconds]
Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:156

  Captured StdOut/StdErr Output >>
  2023/01/26 01:46:32 failed trying to get namespace (capz-e2e-32uycg):namespaces "capz-e2e-32uycg" not found
  cluster.cluster.x-k8s.io/capz-e2e-32uycg-public-custom-vnet created
  azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-32uycg-public-custom-vnet created
  kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-32uycg-public-custom-vnet-control-plane created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-32uycg-public-custom-vnet-control-plane created
  machinedeployment.cluster.x-k8s.io/capz-e2e-32uycg-public-custom-vnet-md-0 created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-32uycg-public-custom-vnet-md-0 created
... skipping 249 lines ...
  Jan 26 02:27:23.831: INFO: Collecting events for Pod kube-system/kube-proxy-2z6jk
  Jan 26 02:27:23.831: INFO: Collecting events for Pod kube-system/kube-proxy-jdlf7
  Jan 26 02:27:23.894: INFO: Fetching kube-system pod logs took 1.007199803s
  Jan 26 02:27:23.894: INFO: Dumping workload cluster capz-e2e-32uycg/capz-e2e-32uycg-public-custom-vnet Azure activity log
  Jan 26 02:27:23.894: INFO: Creating log watcher for controller tigera-operator/tigera-operator-64db64cb98-wq6cd, container tigera-operator
  Jan 26 02:27:23.894: INFO: Collecting events for Pod tigera-operator/tigera-operator-64db64cb98-wq6cd
  Jan 26 02:27:31.497: INFO: Got error while iterating over activity logs for resource group capz-e2e-32uycg-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure responding to next results request: StatusCode=404 -- Original Error: autorest/azure: error response cannot be parsed: {"<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Strict//EN\" \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd\">\r\n<html xmlns=\"http://www.w3.org/1999/xhtml\">\r\n<head>\r\n<meta http-equiv=\"Content-Type\" content=\"text/html; charset=iso-8859-1\"/>\r\n<title>404 - File or directory not found.</title>\r\n<style type=\"text/css\">\r\n<!--\r\nbody{margin:0;font-size:.7em;font-family:Verdana, Arial, Helvetica, sans-serif;background:#EEEEEE;}\r\nfieldset{padding:0 15px 10px 15px;} \r\nh1{font-size:2.4em;margin:0;color:#FFF;}\r\nh2{font-si" '\x00' '\x00'} error: invalid character '<' looking for beginning of value
  Jan 26 02:27:31.497: INFO: Fetching activity logs took 7.602776009s
  Jan 26 02:27:31.497: INFO: Dumping all the Cluster API resources in the "capz-e2e-32uycg" namespace
  Jan 26 02:27:31.859: INFO: Deleting all clusters in the capz-e2e-32uycg namespace
  STEP: Deleting cluster capz-e2e-32uycg-public-custom-vnet @ 01/26/23 02:27:31.878
  INFO: Waiting for the Cluster capz-e2e-32uycg/capz-e2e-32uycg-public-custom-vnet to be deleted
  STEP: Waiting for cluster capz-e2e-32uycg-public-custom-vnet to be deleted @ 01/26/23 02:27:31.891
  INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-577d69cd87-57ldc, container manager: http2: client connection lost
  INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-669bd95bbb-598mg, container manager: http2: client connection lost
  INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6f7b75f796-npz62, container manager: http2: client connection lost
  INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-687b6fd9bc-8rfgx, container manager: http2: client connection lost
  Jan 26 02:30:11.978: INFO: Deleting namespace used for hosting the "create-workload-cluster" test spec
  INFO: Deleting namespace capz-e2e-32uycg
  Jan 26 02:30:11.994: INFO: Running additional cleanup for the "create-workload-cluster" test spec
  Jan 26 02:30:11.994: INFO: deleting an existing virtual network "custom-vnet"
  Jan 26 02:30:23.086: INFO: deleting an existing route table "node-routetable"
  Jan 26 02:30:25.773: INFO: deleting an existing network security group "node-nsg"
... skipping 16 lines ...
[ReportAfterSuite] PASSED [0.017 seconds]
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
------------------------------

Summarizing 1 Failure:
  [FAIL] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] [It] with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:80

Ran 7 of 24 Specs in 2890.651 seconds
FAIL! -- 6 Passed | 1 Failed | 0 Pending | 17 Skipped

You're using deprecated Ginkgo functionality:
=============================================
  CurrentGinkgoTestDescription() is deprecated in Ginkgo V2.  Use CurrentSpecReport() instead.
  Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:426
... skipping 57 lines ...
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:285
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:426

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=2.7.0

--- FAIL: TestE2E (2513.15s)
FAIL

You're using deprecated Ginkgo functionality:
=============================================
  CurrentGinkgoTestDescription() is deprecated in Ginkgo V2.  Use CurrentSpecReport() instead.
  Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:426
... skipping 20 lines ...

PASS


Ginkgo ran 1 suite in 50m26.366135518s

Test Suite Failed
make[1]: *** [Makefile:654: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:663: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...