PR: jackfrancis: Update default k8s version to v1.25 for testing
Result: FAILURE
Tests: 1 failed / 26 succeeded
Started: 2023-01-25 16:58
Elapsed: 1h17m
Revision: aa4b89f70338b5bf172b792cbe9a26a0f73595d6
Refs: 3088

Test Failures


capz-e2e [It] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node (40m28s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sWorkload\scluster\screation\sCreating\sa\sGPU\-enabled\scluster\s\[OPTIONAL\]\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
[FAILED] Timed out after 1500.000s.

Logs for pod gpu-operator-node-feature-discovery-master-77bc558fdc-wpfhl:
I0125 17:15:40.936656       1 nfd-master.go:170] Node Feature Discovery Master v0.10.1
I0125 17:15:40.937322       1 nfd-master.go:174] NodeName: "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
I0125 17:15:40.937638       1 nfd-master.go:185] starting nfd LabelRule controller
I0125 17:15:41.029254       1 nfd-master.go:226] gRPC server serving on port: 8080
I0125 17:15:52.542995       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
I0125 17:16:49.556731       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-md-0-22mqg"
I0125 17:16:52.592717       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
I0125 17:17:49.597238       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-md-0-22mqg"
I0125 17:17:52.628642       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
I0125 17:18:49.634364       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-md-0-22mqg"
I0125 17:18:52.664170       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
I0125 17:19:49.662100       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-md-0-22mqg"
I0125 17:19:52.695985       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
I0125 17:20:49.686016       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-md-0-22mqg"
I0125 17:20:52.720181       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
I0125 17:21:49.716738       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-md-0-22mqg"
I0125 17:21:52.746612       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
I0125 17:22:49.741475       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-md-0-22mqg"
I0125 17:22:52.782016       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
I0125 17:23:49.768996       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-md-0-22mqg"
I0125 17:23:52.809905       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
I0125 17:24:49.793718       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-md-0-22mqg"
I0125 17:24:52.834586       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
I0125 17:25:49.817640       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-md-0-22mqg"
I0125 17:25:52.859662       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
I0125 17:26:49.843342       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-md-0-22mqg"
I0125 17:26:52.885145       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
I0125 17:27:49.871178       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-md-0-22mqg"
I0125 17:27:52.924746       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
I0125 17:28:49.895612       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-md-0-22mqg"
I0125 17:28:52.949416       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
I0125 17:29:49.926843       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-md-0-22mqg"
I0125 17:29:52.974053       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
I0125 17:30:49.952749       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-md-0-22mqg"
I0125 17:30:53.001307       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
I0125 17:31:49.975917       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-md-0-22mqg"
I0125 17:31:53.032349       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
I0125 17:32:50.001867       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-md-0-22mqg"
I0125 17:32:53.064237       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
I0125 17:33:50.025220       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-md-0-22mqg"
I0125 17:33:53.087933       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
I0125 17:34:50.061598       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-md-0-22mqg"
I0125 17:34:53.111313       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
I0125 17:35:50.083789       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-md-0-22mqg"
I0125 17:35:53.134967       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
I0125 17:36:50.115781       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-md-0-22mqg"
I0125 17:36:53.159079       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
I0125 17:37:50.139935       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-md-0-22mqg"
I0125 17:37:53.182281       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
I0125 17:38:50.164124       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-md-0-22mqg"
I0125 17:38:53.208399       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
I0125 17:39:50.191067       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-md-0-22mqg"
I0125 17:39:53.236241       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
I0125 17:40:50.216964       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-md-0-22mqg"
I0125 17:40:53.265237       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
I0125 17:41:50.242239       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-md-0-22mqg"
I0125 17:41:53.299228       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
I0125 17:42:50.267976       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-md-0-22mqg"
I0125 17:42:53.325431       1 nfd-master.go:423] received labeling request for node "capz-e2e-0fpp5m-gpu-control-plane-x2w68"

Logs for pod gpu-operator-node-feature-discovery-worker-7zw7m:
I0125 17:15:07.001909       1 nfd-worker.go:155] Node Feature Discovery Worker v0.10.1
I0125 17:15:07.002111       1 nfd-worker.go:156] NodeName: 'capz-e2e-0fpp5m-gpu-control-plane-x2w68'
I0125 17:15:07.002623       1 nfd-worker.go:423] configuration file "/etc/kubernetes/node-feature-discovery/nfd-worker.conf" parsed
I0125 17:15:07.002754       1 nfd-worker.go:461] worker (re-)configuration successfully completed
I0125 17:15:07.003053       1 base.go:126] connecting to nfd-master at gpu-operator-node-feature-discovery-master:8080 ...
I0125 17:15:07.003152       1 component.go:36] [core]parsed scheme: ""
I0125 17:15:07.003182       1 component.go:36] [core]scheme "" not registered, fallback to default scheme
I0125 17:15:07.003291       1 component.go:36] [core]ccResolverWrapper: sending update to cc: {[{gpu-operator-node-feature-discovery-master:8080  <nil> 0 <nil>}] <nil> <nil>}
I0125 17:15:07.003373       1 component.go:36] [core]ClientConn switching balancer to "pick_first"
I0125 17:15:07.003419       1 component.go:36] [core]Channel switches to new LB policy "pick_first"
I0125 17:15:07.003512       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
I0125 17:15:07.003616       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
I0125 17:15:07.005637       1 component.go:36] [core]Channel Connectivity change to CONNECTING
W0125 17:15:07.006803       1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 10.108.185.235:8080: connect: connection refused". Reconnecting...
I0125 17:15:07.006918       1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
I0125 17:15:07.007008       1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
I0125 17:15:08.008456       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
I0125 17:15:08.008490       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
I0125 17:15:08.008688       1 component.go:36] [core]Channel Connectivity change to CONNECTING
W0125 17:15:08.009619       1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 10.108.185.235:8080: connect: connection refused". Reconnecting...
I0125 17:15:08.009642       1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
I0125 17:15:08.009669       1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
I0125 17:15:09.818013       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
I0125 17:15:09.818038       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
I0125 17:15:09.818126       1 component.go:36] [core]Channel Connectivity change to CONNECTING
W0125 17:15:09.824702       1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 10.108.185.235:8080: connect: connection refused". Reconnecting...
I0125 17:15:09.824725       1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
I0125 17:15:09.824744       1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
I0125 17:15:12.412642       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
I0125 17:15:12.412945       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
I0125 17:15:12.413202       1 component.go:36] [core]Channel Connectivity change to CONNECTING
W0125 17:15:12.417871       1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 10.108.185.235:8080: connect: connection refused". Reconnecting...
I0125 17:15:12.417891       1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
I0125 17:15:12.418041       1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
I0125 17:15:17.069650       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
I0125 17:15:17.069684       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
I0125 17:15:17.069873       1 component.go:36] [core]Channel Connectivity change to CONNECTING
W0125 17:15:17.073774       1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 10.108.185.235:8080: connect: connection refused". Reconnecting...
I0125 17:15:17.073792       1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
I0125 17:15:17.073805       1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
I0125 17:15:22.958236       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
I0125 17:15:22.958269       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
I0125 17:15:22.958875       1 component.go:36] [core]Channel Connectivity change to CONNECTING
W0125 17:15:22.959342       1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 10.108.185.235:8080: connect: connection refused". Reconnecting...
I0125 17:15:22.959353       1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
I0125 17:15:22.959372       1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
I0125 17:15:35.377662       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
I0125 17:15:35.377694       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
I0125 17:15:35.377726       1 component.go:36] [core]Channel Connectivity change to CONNECTING
W0125 17:15:35.378643       1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 10.108.185.235:8080: connect: connection refused". Reconnecting...
I0125 17:15:35.378654       1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
I0125 17:15:35.378687       1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
I0125 17:15:52.499791       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
I0125 17:15:52.499888       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
I0125 17:15:52.500037       1 component.go:36] [core]Channel Connectivity change to CONNECTING
I0125 17:15:52.504651       1 component.go:36] [core]Subchannel Connectivity change to READY
I0125 17:15:52.504674       1 component.go:36] [core]Channel Connectivity change to READY
I0125 17:15:52.520121       1 nfd-worker.go:472] starting feature discovery...
I0125 17:15:52.520399       1 nfd-worker.go:484] feature discovery completed
I0125 17:15:52.520415       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:16:52.580437       1 nfd-worker.go:472] starting feature discovery...
I0125 17:16:52.580628       1 nfd-worker.go:484] feature discovery completed
I0125 17:16:52.580645       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:17:52.617775       1 nfd-worker.go:472] starting feature discovery...
I0125 17:17:52.618012       1 nfd-worker.go:484] feature discovery completed
I0125 17:17:52.618028       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:18:52.653591       1 nfd-worker.go:472] starting feature discovery...
I0125 17:18:52.653738       1 nfd-worker.go:484] feature discovery completed
I0125 17:18:52.653800       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:19:52.683699       1 nfd-worker.go:472] starting feature discovery...
I0125 17:19:52.683978       1 nfd-worker.go:484] feature discovery completed
I0125 17:19:52.683994       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:20:52.709643       1 nfd-worker.go:472] starting feature discovery...
I0125 17:20:52.709907       1 nfd-worker.go:484] feature discovery completed
I0125 17:20:52.709954       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:21:52.733082       1 nfd-worker.go:472] starting feature discovery...
I0125 17:21:52.733343       1 nfd-worker.go:484] feature discovery completed
I0125 17:21:52.733358       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:22:52.761994       1 nfd-worker.go:472] starting feature discovery...
I0125 17:22:52.762193       1 nfd-worker.go:484] feature discovery completed
I0125 17:22:52.762208       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:23:52.799164       1 nfd-worker.go:472] starting feature discovery...
I0125 17:23:52.799435       1 nfd-worker.go:484] feature discovery completed
I0125 17:23:52.799451       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:24:52.822324       1 nfd-worker.go:472] starting feature discovery...
I0125 17:24:52.822505       1 nfd-worker.go:484] feature discovery completed
I0125 17:24:52.822614       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:25:52.848076       1 nfd-worker.go:472] starting feature discovery...
I0125 17:25:52.848348       1 nfd-worker.go:484] feature discovery completed
I0125 17:25:52.848364       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:26:52.874543       1 nfd-worker.go:472] starting feature discovery...
I0125 17:26:52.874827       1 nfd-worker.go:484] feature discovery completed
I0125 17:26:52.874843       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:27:52.908745       1 nfd-worker.go:472] starting feature discovery...
I0125 17:27:52.909029       1 nfd-worker.go:484] feature discovery completed
I0125 17:27:52.909044       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:28:52.938471       1 nfd-worker.go:472] starting feature discovery...
I0125 17:28:52.938659       1 nfd-worker.go:484] feature discovery completed
I0125 17:28:52.938677       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:29:52.962235       1 nfd-worker.go:472] starting feature discovery...
I0125 17:29:52.962554       1 nfd-worker.go:484] feature discovery completed
I0125 17:29:52.962573       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:30:52.989658       1 nfd-worker.go:472] starting feature discovery...
I0125 17:30:52.989834       1 nfd-worker.go:484] feature discovery completed
I0125 17:30:52.989847       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:31:53.016226       1 nfd-worker.go:472] starting feature discovery...
I0125 17:31:53.016452       1 nfd-worker.go:484] feature discovery completed
I0125 17:31:53.016468       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:32:53.046394       1 nfd-worker.go:472] starting feature discovery...
I0125 17:32:53.046689       1 nfd-worker.go:484] feature discovery completed
I0125 17:32:53.046705       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:33:53.076785       1 nfd-worker.go:472] starting feature discovery...
I0125 17:33:53.077056       1 nfd-worker.go:484] feature discovery completed
I0125 17:33:53.077072       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:34:53.099824       1 nfd-worker.go:472] starting feature discovery...
I0125 17:34:53.100100       1 nfd-worker.go:484] feature discovery completed
I0125 17:34:53.100117       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:35:53.124317       1 nfd-worker.go:472] starting feature discovery...
I0125 17:35:53.124505       1 nfd-worker.go:484] feature discovery completed
I0125 17:35:53.124520       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:36:53.147332       1 nfd-worker.go:472] starting feature discovery...
I0125 17:36:53.147636       1 nfd-worker.go:484] feature discovery completed
I0125 17:36:53.147651       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:37:53.171698       1 nfd-worker.go:472] starting feature discovery...
I0125 17:37:53.171955       1 nfd-worker.go:484] feature discovery completed
I0125 17:37:53.171969       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:38:53.193696       1 nfd-worker.go:472] starting feature discovery...
I0125 17:38:53.193956       1 nfd-worker.go:484] feature discovery completed
I0125 17:38:53.193995       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:39:53.222880       1 nfd-worker.go:472] starting feature discovery...
I0125 17:39:53.223031       1 nfd-worker.go:484] feature discovery completed
I0125 17:39:53.223042       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:40:53.254314       1 nfd-worker.go:472] starting feature discovery...
I0125 17:40:53.254570       1 nfd-worker.go:484] feature discovery completed
I0125 17:40:53.254585       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:41:53.286226       1 nfd-worker.go:472] starting feature discovery...
I0125 17:41:53.286382       1 nfd-worker.go:484] feature discovery completed
I0125 17:41:53.286391       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:42:53.315149       1 nfd-worker.go:472] starting feature discovery...
I0125 17:42:53.315331       1 nfd-worker.go:484] feature discovery completed
I0125 17:42:53.315368       1 nfd-worker.go:565] sending labeling request to nfd-master

Logs for pod gpu-operator-node-feature-discovery-worker-dbl82:
I0125 17:16:49.528648       1 nfd-worker.go:155] Node Feature Discovery Worker v0.10.1
I0125 17:16:49.528715       1 nfd-worker.go:156] NodeName: 'capz-e2e-0fpp5m-gpu-md-0-22mqg'
I0125 17:16:49.529075       1 nfd-worker.go:423] configuration file "/etc/kubernetes/node-feature-discovery/nfd-worker.conf" parsed
I0125 17:16:49.529159       1 nfd-worker.go:461] worker (re-)configuration successfully completed
I0125 17:16:49.529204       1 base.go:126] connecting to nfd-master at gpu-operator-node-feature-discovery-master:8080 ...
I0125 17:16:49.529231       1 component.go:36] [core]parsed scheme: ""
I0125 17:16:49.529238       1 component.go:36] [core]scheme "" not registered, fallback to default scheme
I0125 17:16:49.529253       1 component.go:36] [core]ccResolverWrapper: sending update to cc: {[{gpu-operator-node-feature-discovery-master:8080  <nil> 0 <nil>}] <nil> <nil>}
I0125 17:16:49.529263       1 component.go:36] [core]ClientConn switching balancer to "pick_first"
I0125 17:16:49.529267       1 component.go:36] [core]Channel switches to new LB policy "pick_first"
I0125 17:16:49.529289       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
I0125 17:16:49.529312       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
I0125 17:16:49.529401       1 component.go:36] [core]Channel Connectivity change to CONNECTING
I0125 17:16:49.536027       1 component.go:36] [core]Subchannel Connectivity change to READY
I0125 17:16:49.536059       1 component.go:36] [core]Channel Connectivity change to READY
I0125 17:16:49.544128       1 nfd-worker.go:472] starting feature discovery...
I0125 17:16:49.544233       1 nfd-worker.go:484] feature discovery completed
I0125 17:16:49.544244       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:17:49.586501       1 nfd-worker.go:472] starting feature discovery...
I0125 17:17:49.586613       1 nfd-worker.go:484] feature discovery completed
I0125 17:17:49.586626       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:18:49.624915       1 nfd-worker.go:472] starting feature discovery...
I0125 17:18:49.625027       1 nfd-worker.go:484] feature discovery completed
I0125 17:18:49.625040       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:19:49.650252       1 nfd-worker.go:472] starting feature discovery...
I0125 17:19:49.650394       1 nfd-worker.go:484] feature discovery completed
I0125 17:19:49.650403       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:20:49.675283       1 nfd-worker.go:472] starting feature discovery...
I0125 17:20:49.675396       1 nfd-worker.go:484] feature discovery completed
I0125 17:20:49.675409       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:21:49.699129       1 nfd-worker.go:472] starting feature discovery...
I0125 17:21:49.699240       1 nfd-worker.go:484] feature discovery completed
I0125 17:21:49.699252       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:22:49.729703       1 nfd-worker.go:472] starting feature discovery...
I0125 17:22:49.729816       1 nfd-worker.go:484] feature discovery completed
I0125 17:22:49.729829       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:23:49.758217       1 nfd-worker.go:472] starting feature discovery...
I0125 17:23:49.758426       1 nfd-worker.go:484] feature discovery completed
I0125 17:23:49.758551       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:24:49.782373       1 nfd-worker.go:472] starting feature discovery...
I0125 17:24:49.782482       1 nfd-worker.go:484] feature discovery completed
I0125 17:24:49.782494       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:25:49.805943       1 nfd-worker.go:472] starting feature discovery...
I0125 17:25:49.806054       1 nfd-worker.go:484] feature discovery completed
I0125 17:25:49.806066       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:26:49.831568       1 nfd-worker.go:472] starting feature discovery...
I0125 17:26:49.831681       1 nfd-worker.go:484] feature discovery completed
I0125 17:26:49.831694       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:27:49.861112       1 nfd-worker.go:472] starting feature discovery...
I0125 17:27:49.861224       1 nfd-worker.go:484] feature discovery completed
I0125 17:27:49.861237       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:28:49.884443       1 nfd-worker.go:472] starting feature discovery...
I0125 17:28:49.884557       1 nfd-worker.go:484] feature discovery completed
I0125 17:28:49.884570       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:29:49.915986       1 nfd-worker.go:472] starting feature discovery...
I0125 17:29:49.916150       1 nfd-worker.go:484] feature discovery completed
I0125 17:29:49.916164       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:30:49.941698       1 nfd-worker.go:472] starting feature discovery...
I0125 17:30:49.941811       1 nfd-worker.go:484] feature discovery completed
I0125 17:30:49.941824       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:31:49.966564       1 nfd-worker.go:472] starting feature discovery...
I0125 17:31:49.966676       1 nfd-worker.go:484] feature discovery completed
I0125 17:31:49.966689       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:32:49.990453       1 nfd-worker.go:472] starting feature discovery...
I0125 17:32:49.990563       1 nfd-worker.go:484] feature discovery completed
I0125 17:32:49.990576       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:33:50.015640       1 nfd-worker.go:472] starting feature discovery...
I0125 17:33:50.015754       1 nfd-worker.go:484] feature discovery completed
I0125 17:33:50.015766       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:34:50.040607       1 nfd-worker.go:472] starting feature discovery...
I0125 17:34:50.040718       1 nfd-worker.go:484] feature discovery completed
I0125 17:34:50.040730       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:35:50.074603       1 nfd-worker.go:472] starting feature discovery...
I0125 17:35:50.074718       1 nfd-worker.go:484] feature discovery completed
I0125 17:35:50.074731       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:36:50.104929       1 nfd-worker.go:472] starting feature discovery...
I0125 17:36:50.105037       1 nfd-worker.go:484] feature discovery completed
I0125 17:36:50.105049       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:37:50.130566       1 nfd-worker.go:472] starting feature discovery...
I0125 17:37:50.130676       1 nfd-worker.go:484] feature discovery completed
I0125 17:37:50.130689       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:38:50.154352       1 nfd-worker.go:472] starting feature discovery...
I0125 17:38:50.154480       1 nfd-worker.go:484] feature discovery completed
I0125 17:38:50.154493       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:39:50.179681       1 nfd-worker.go:472] starting feature discovery...
I0125 17:39:50.179810       1 nfd-worker.go:484] feature discovery completed
I0125 17:39:50.179824       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:40:50.207602       1 nfd-worker.go:472] starting feature discovery...
I0125 17:40:50.207715       1 nfd-worker.go:484] feature discovery completed
I0125 17:40:50.207728       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:41:50.232231       1 nfd-worker.go:472] starting feature discovery...
I0125 17:41:50.232343       1 nfd-worker.go:484] feature discovery completed
I0125 17:41:50.232356       1 nfd-worker.go:565] sending labeling request to nfd-master
I0125 17:42:50.258027       1 nfd-worker.go:472] starting feature discovery...
I0125 17:42:50.258140       1 nfd-worker.go:484] feature discovery completed
I0125 17:42:50.258153       1 nfd-worker.go:565] sending labeling request to nfd-master

Expected
    <bool>: false
to be true
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:80 @ 01/25/23 17:43:01.19
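The assertion at azure_gpu.go:80 waits for a node in the workload cluster to report a non-zero "nvidia.com/gpu" allocatable resource and fails once the 1500-second timeout expires; the nfd master and worker logs above are included as diagnostic context for that timeout. Below is a minimal sketch of that kind of polling check, assuming a Gomega Eventually wrapper around a client-go node list; the helper name, timeout, and interval are illustrative and not the repository's actual code.

package e2e_sketch

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForGPUAllocatable is a hypothetical helper: it polls the workload
// cluster until at least one node advertises a non-zero "nvidia.com/gpu"
// allocatable resource, the condition whose timeout produced the failure above.
func waitForGPUAllocatable(ctx context.Context, clientset kubernetes.Interface) {
	Eventually(func() bool {
		nodes, err := clientset.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return false
		}
		for _, node := range nodes.Items {
			// A GPU node is ready once the device plugin publishes this resource.
			if qty, ok := node.Status.Allocatable[corev1.ResourceName("nvidia.com/gpu")]; ok && !qty.IsZero() {
				return true
			}
		}
		return false
	}, 25*time.Minute, 10*time.Second).Should(BeTrue(), "no node reported an nvidia.com/gpu allocatable resource")
}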




Error lines from build-log.txt

... skipping 626 lines ...
------------------------------
• [920.521 seconds]
Workload cluster creation Creating a cluster that uses the external cloud provider and machinepools [OPTIONAL] with 1 control plane node and 1 machinepool
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:573

  Captured StdOut/StdErr Output >>
  2023/01/25 17:09:59 failed trying to get namespace (capz-e2e-0mdkr1):namespaces "capz-e2e-0mdkr1" not found
  cluster.cluster.x-k8s.io/capz-e2e-0mdkr1-flex created
  azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-0mdkr1-flex created
  kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-0mdkr1-flex-control-plane created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-0mdkr1-flex-control-plane created
  machinepool.cluster.x-k8s.io/capz-e2e-0mdkr1-flex-mp-0 created
  azuremachinepool.infrastructure.cluster.x-k8s.io/capz-e2e-0mdkr1-flex-mp-0 created
... skipping 2 lines ...

  felixconfiguration.crd.projectcalico.org/default configured

  W0125 17:18:40.038445   37434 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
  2023/01/25 17:19:11 [DEBUG] GET http://20.75.160.136
  W0125 17:19:48.183108   37434 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
  Failed to get logs for MachinePool capz-e2e-0mdkr1-flex-mp-0, Cluster capz-e2e-0mdkr1/capz-e2e-0mdkr1-flex: Unable to collect VMSS Boot Diagnostic logs: failed to parse resource id: parsing failed for /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-0mdkr1-flex/providers/Microsoft.Compute. Invalid resource Id format
  << Captured StdOut/StdErr Output

  Timeline >>
  INFO: "" started at Wed, 25 Jan 2023 17:09:59 UTC on Ginkgo node 5 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
  STEP: Creating namespace "capz-e2e-0mdkr1" for hosting the cluster @ 01/25/23 17:09:59.949
  Jan 25 17:09:59.949: INFO: starting to create namespace for hosting the "capz-e2e-0mdkr1" test spec
... skipping 229 lines ...
------------------------------
• [1015.700 seconds]
Workload cluster creation Creating a Flatcar cluster [OPTIONAL] With Flatcar control-plane and worker nodes
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:321

  Captured StdOut/StdErr Output >>
  2023/01/25 17:09:59 failed trying to get namespace (capz-e2e-bv735k):namespaces "capz-e2e-bv735k" not found
  cluster.cluster.x-k8s.io/capz-e2e-bv735k-flatcar created
  azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-bv735k-flatcar created
  kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-bv735k-flatcar-control-plane created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-bv735k-flatcar-control-plane created
  machinedeployment.cluster.x-k8s.io/capz-e2e-bv735k-flatcar-md-0 created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-bv735k-flatcar-md-0 created
  kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-bv735k-flatcar-md-0 created
  azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created

  felixconfiguration.crd.projectcalico.org/default configured

  Failed to get logs for Machine capz-e2e-bv735k-flatcar-control-plane-b8jhg, Cluster capz-e2e-bv735k/capz-e2e-bv735k-flatcar: [dialing public load balancer at capz-e2e-bv735k-flatcar-bc930315.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.134.165:55268->20.242.244.157:22: read: connection reset by peer, dialing public load balancer at capz-e2e-bv735k-flatcar-bc930315.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.134.165:55272->20.242.244.157:22: read: connection reset by peer]
  Failed to get logs for Machine capz-e2e-bv735k-flatcar-md-0-55c94f8d65-drtwh, Cluster capz-e2e-bv735k/capz-e2e-bv735k-flatcar: [dialing public load balancer at capz-e2e-bv735k-flatcar-bc930315.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.134.165:40850->20.242.244.157:22: read: connection reset by peer, dialing public load balancer at capz-e2e-bv735k-flatcar-bc930315.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.134.165:40840->20.242.244.157:22: read: connection reset by peer, dialing public load balancer at capz-e2e-bv735k-flatcar-bc930315.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.134.165:40852->20.242.244.157:22: read: connection reset by peer, dialing public load balancer at capz-e2e-bv735k-flatcar-bc930315.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.134.165:40846->20.242.244.157:22: read: connection reset by peer, dialing public load balancer at capz-e2e-bv735k-flatcar-bc930315.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.134.165:40854->20.242.244.157:22: read: connection reset by peer, dialing public load balancer at capz-e2e-bv735k-flatcar-bc930315.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.134.165:40856->20.242.244.157:22: read: connection reset by peer, dialing public load balancer at capz-e2e-bv735k-flatcar-bc930315.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.134.165:40844->20.242.244.157:22: read: connection reset by peer, dialing public load balancer at capz-e2e-bv735k-flatcar-bc930315.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.134.165:40842->20.242.244.157:22: read: connection reset by peer, dialing public load balancer at capz-e2e-bv735k-flatcar-bc930315.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.134.165:40848->20.242.244.157:22: read: connection reset by peer]
  << Captured StdOut/StdErr Output

  Timeline >>
  INFO: "" started at Wed, 25 Jan 2023 17:09:59 UTC on Ginkgo node 7 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
  STEP: Creating namespace "capz-e2e-bv735k" for hosting the cluster @ 01/25/23 17:09:59.946
  Jan 25 17:09:59.946: INFO: starting to create namespace for hosting the "capz-e2e-bv735k" test spec
... skipping 157 lines ...
------------------------------
• [1073.375 seconds]
Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:637

  Captured StdOut/StdErr Output >>
  2023/01/25 17:09:59 failed trying to get namespace (capz-e2e-mm01nx):namespaces "capz-e2e-mm01nx" not found
  cluster.cluster.x-k8s.io/capz-e2e-mm01nx-oot created
  azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-mm01nx-oot created
  kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-mm01nx-oot-control-plane created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-mm01nx-oot-control-plane created
  machinedeployment.cluster.x-k8s.io/capz-e2e-mm01nx-oot-md-0 created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-mm01nx-oot-md-0 created
  kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-mm01nx-oot-md-0 created
  azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created

  felixconfiguration.crd.projectcalico.org/default configured

  W0125 17:18:45.899407   37463 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
  2023/01/25 17:20:06 [DEBUG] GET http://20.75.161.133
  2023/01/25 17:20:36 [ERR] GET http://20.75.161.133 request failed: Get "http://20.75.161.133": dial tcp 20.75.161.133:80: i/o timeout
  2023/01/25 17:20:36 [DEBUG] GET http://20.75.161.133: retrying in 1s (4 left)
  W0125 17:21:21.034645   37463 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
  << Captured StdOut/StdErr Output

  Timeline >>
  INFO: "" started at Wed, 25 Jan 2023 17:09:59 UTC on Ginkgo node 9 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
... skipping 275 lines ...
------------------------------
• [1191.838 seconds]
Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:906

  Captured StdOut/StdErr Output >>
  2023/01/25 17:09:59 failed trying to get namespace (capz-e2e-qag6dm):namespaces "capz-e2e-qag6dm" not found
  clusterclass.cluster.x-k8s.io/ci-default created
  kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/ci-default-kubeadm-control-plane created
  azureclustertemplate.infrastructure.cluster.x-k8s.io/ci-default-azure-cluster created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/ci-default-control-plane created
  kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/ci-default-worker created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/ci-default-worker created
... skipping 5 lines ...
  clusterresourceset.addons.cluster.x-k8s.io/csi-proxy created
  configmap/cni-capz-e2e-qag6dm-cc-calico-windows created
  configmap/csi-proxy-addon created

  felixconfiguration.crd.projectcalico.org/default configured

  Failed to get logs for Machine capz-e2e-qag6dm-cc-md-0-2z6pq-5588bccf88-p9b4v, Cluster capz-e2e-qag6dm/capz-e2e-qag6dm-cc: dialing public load balancer at capz-e2e-qag6dm-cc-25757b91.eastus.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
  Failed to get logs for Machine capz-e2e-qag6dm-cc-md-win-bcg57-6f958894c8-jcgld, Cluster capz-e2e-qag6dm/capz-e2e-qag6dm-cc: dialing public load balancer at capz-e2e-qag6dm-cc-25757b91.eastus.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
  Failed to get logs for Machine capz-e2e-qag6dm-cc-xv5pl-wkjq8, Cluster capz-e2e-qag6dm/capz-e2e-qag6dm-cc: dialing public load balancer at capz-e2e-qag6dm-cc-25757b91.eastus.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
  << Captured StdOut/StdErr Output

  Timeline >>
  INFO: "" started at Wed, 25 Jan 2023 17:09:59 UTC on Ginkgo node 6 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
  STEP: Creating namespace "capz-e2e-qag6dm" for hosting the cluster @ 01/25/23 17:09:59.961
  Jan 25 17:09:59.961: INFO: starting to create namespace for hosting the "capz-e2e-qag6dm" test spec
... skipping 186 lines ...
  Jan 25 17:24:12.008: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-win-mpnqz, container node-driver-registrar
  Jan 25 17:24:12.008: INFO: Collecting events for Pod kube-system/kube-proxy-windows-z9cmt
  Jan 25 17:24:12.057: INFO: Fetching kube-system pod logs took 563.794168ms
  Jan 25 17:24:12.057: INFO: Dumping workload cluster capz-e2e-qag6dm/capz-e2e-qag6dm-cc Azure activity log
  Jan 25 17:24:12.057: INFO: Creating log watcher for controller tigera-operator/tigera-operator-64db64cb98-rmvpp, container tigera-operator
  Jan 25 17:24:12.058: INFO: Collecting events for Pod tigera-operator/tigera-operator-64db64cb98-rmvpp
  Jan 25 17:24:12.081: INFO: Error fetching activity logs for cluster capz-e2e-qag6dm-cc in namespace capz-e2e-qag6dm.  Not able to find the AzureManagedControlPlane on the management cluster: azuremanagedcontrolplanes.infrastructure.cluster.x-k8s.io "capz-e2e-qag6dm-cc" not found
  Jan 25 17:24:12.081: INFO: Fetching activity logs took 24.235402ms
  Jan 25 17:24:12.081: INFO: Dumping all the Cluster API resources in the "capz-e2e-qag6dm" namespace
  Jan 25 17:24:12.497: INFO: Deleting all clusters in the capz-e2e-qag6dm namespace
  STEP: Deleting cluster capz-e2e-qag6dm-cc @ 01/25/23 17:24:12.514
  INFO: Waiting for the Cluster capz-e2e-qag6dm/capz-e2e-qag6dm-cc to be deleted
  STEP: Waiting for cluster capz-e2e-qag6dm-cc to be deleted @ 01/25/23 17:24:12.528
... skipping 10 lines ...
------------------------------
• [1293.738 seconds]
Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:830

  Captured StdOut/StdErr Output >>
  2023/01/25 17:09:59 failed trying to get namespace (capz-e2e-6r3f4d):namespaces "capz-e2e-6r3f4d" not found
  cluster.cluster.x-k8s.io/capz-e2e-6r3f4d-dual-stack created
  azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-6r3f4d-dual-stack created
  kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-6r3f4d-dual-stack-control-plane created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-6r3f4d-dual-stack-control-plane created
  azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
  machinedeployment.cluster.x-k8s.io/capz-e2e-6r3f4d-dual-stack-md-0 created
... skipping 325 lines ...
  << Timeline
------------------------------
[SynchronizedAfterSuite] PASSED [0.000 seconds]
[SynchronizedAfterSuite] 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:116
------------------------------
• [FAILED] [2428.060 seconds]
Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] [It] with a single control plane node and 1 node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:506

  Captured StdOut/StdErr Output >>
  2023/01/25 17:09:59 failed trying to get namespace (capz-e2e-0fpp5m):namespaces "capz-e2e-0fpp5m" not found
  cluster.cluster.x-k8s.io/capz-e2e-0fpp5m-gpu serverside-applied
  azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-0fpp5m-gpu serverside-applied
  kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-0fpp5m-gpu-control-plane serverside-applied
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-0fpp5m-gpu-control-plane serverside-applied
  azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp serverside-applied
  machinedeployment.cluster.x-k8s.io/capz-e2e-0fpp5m-gpu-md-0 serverside-applied
... skipping 109 lines ...
  STEP: Verifying specified VM extensions are created on Azure @ 01/25/23 17:18:00.563
  STEP: Retrieving all machine pools from the machine template spec @ 01/25/23 17:18:00.991
  Jan 25 17:18:00.991: INFO: Listing machine pools in namespace capz-e2e-0fpp5m with label cluster.x-k8s.io/cluster-name=capz-e2e-0fpp5m-gpu
  STEP: Running a GPU-based calculation @ 01/25/23 17:18:00.995
  STEP: creating a Kubernetes client to the workload cluster @ 01/25/23 17:18:00.995
  STEP: Waiting for a node to have an "nvidia.com/gpu" allocatable resource @ 01/25/23 17:18:01.012
  [FAILED] in [It] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:80 @ 01/25/23 17:43:01.19
  Jan 25 17:43:01.190: INFO: FAILED!
  Jan 25 17:43:01.190: INFO: Cleaning up after "Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node" spec
  STEP: Dumping logs from the "capz-e2e-0fpp5m-gpu" workload cluster @ 01/25/23 17:43:01.19
  Jan 25 17:43:01.190: INFO: Dumping workload cluster capz-e2e-0fpp5m/capz-e2e-0fpp5m-gpu logs
  Jan 25 17:43:01.240: INFO: Collecting logs for Linux node capz-e2e-0fpp5m-gpu-control-plane-x2w68 in cluster capz-e2e-0fpp5m-gpu in namespace capz-e2e-0fpp5m

  Jan 25 17:43:18.829: INFO: Collecting boot logs for AzureMachine capz-e2e-0fpp5m-gpu-control-plane-x2w68
... skipping 74 lines ...
  INFO: Deleting namespace capz-e2e-0fpp5m
  Jan 25 17:49:13.151: INFO: Checking if any resources are left over in Azure for spec "create-workload-cluster"
  STEP: Redacting sensitive information from logs @ 01/25/23 17:49:13.816
  INFO: "with a single control plane node and 1 node" started at Wed, 25 Jan 2023 17:50:28 UTC on Ginkgo node 10 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
  << Timeline

  [FAILED] Timed out after 1500.000s.

  Logs for pod gpu-operator-node-feature-discovery-master-77bc558fdc-wpfhl:
  I0125 17:15:40.936656       1 nfd-master.go:170] Node Feature Discovery Master v0.10.1
  I0125 17:15:40.937322       1 nfd-master.go:174] NodeName: "capz-e2e-0fpp5m-gpu-control-plane-x2w68"
  I0125 17:15:40.937638       1 nfd-master.go:185] starting nfd LabelRule controller
  I0125 17:15:41.029254       1 nfd-master.go:226] gRPC server serving on port: 8080
... skipping 64 lines ...
  I0125 17:15:07.003291       1 component.go:36] [core]ccResolverWrapper: sending update to cc: {[{gpu-operator-node-feature-discovery-master:8080  <nil> 0 <nil>}] <nil> <nil>}
  I0125 17:15:07.003373       1 component.go:36] [core]ClientConn switching balancer to "pick_first"
  I0125 17:15:07.003419       1 component.go:36] [core]Channel switches to new LB policy "pick_first"
  I0125 17:15:07.003512       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
  I0125 17:15:07.003616       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
  I0125 17:15:07.005637       1 component.go:36] [core]Channel Connectivity change to CONNECTING
  W0125 17:15:07.006803       1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 10.108.185.235:8080: connect: connection refused". Reconnecting...
  I0125 17:15:07.006918       1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
  I0125 17:15:07.007008       1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
  I0125 17:15:08.008456       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
  I0125 17:15:08.008490       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
  I0125 17:15:08.008688       1 component.go:36] [core]Channel Connectivity change to CONNECTING
  W0125 17:15:08.009619       1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 10.108.185.235:8080: connect: connection refused". Reconnecting...
  I0125 17:15:08.009642       1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
  I0125 17:15:08.009669       1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
  I0125 17:15:09.818013       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
  I0125 17:15:09.818038       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
  I0125 17:15:09.818126       1 component.go:36] [core]Channel Connectivity change to CONNECTING
  W0125 17:15:09.824702       1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 10.108.185.235:8080: connect: connection refused". Reconnecting...
  I0125 17:15:09.824725       1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
  I0125 17:15:09.824744       1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
  I0125 17:15:12.412642       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
  I0125 17:15:12.412945       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
  I0125 17:15:12.413202       1 component.go:36] [core]Channel Connectivity change to CONNECTING
  W0125 17:15:12.417871       1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 10.108.185.235:8080: connect: connection refused". Reconnecting...
  I0125 17:15:12.417891       1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
  I0125 17:15:12.418041       1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
  I0125 17:15:17.069650       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
  I0125 17:15:17.069684       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
  I0125 17:15:17.069873       1 component.go:36] [core]Channel Connectivity change to CONNECTING
  W0125 17:15:17.073774       1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 10.108.185.235:8080: connect: connection refused". Reconnecting...
  I0125 17:15:17.073792       1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
  I0125 17:15:17.073805       1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
  I0125 17:15:22.958236       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
  I0125 17:15:22.958269       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
  I0125 17:15:22.958875       1 component.go:36] [core]Channel Connectivity change to CONNECTING
  W0125 17:15:22.959342       1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 10.108.185.235:8080: connect: connection refused". Reconnecting...
  I0125 17:15:22.959353       1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
  I0125 17:15:22.959372       1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
  I0125 17:15:35.377662       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
  I0125 17:15:35.377694       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
  I0125 17:15:35.377726       1 component.go:36] [core]Channel Connectivity change to CONNECTING
  W0125 17:15:35.378643       1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 10.108.185.235:8080: connect: connection refused". Reconnecting...
  I0125 17:15:35.378654       1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
  I0125 17:15:35.378687       1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
  I0125 17:15:52.499791       1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
  I0125 17:15:52.499888       1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
  I0125 17:15:52.500037       1 component.go:36] [core]Channel Connectivity change to CONNECTING
  I0125 17:15:52.504651       1 component.go:36] [core]Subchannel Connectivity change to READY
... skipping 200 lines ...
------------------------------
• [3738.429 seconds]
Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:156

  Captured StdOut/StdErr Output >>
  2023/01/25 17:09:59 failed trying to get namespace (capz-e2e-ztovpr):namespaces "capz-e2e-ztovpr" not found
  cluster.cluster.x-k8s.io/capz-e2e-ztovpr-public-custom-vnet created
  azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-ztovpr-public-custom-vnet created
  kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-ztovpr-public-custom-vnet-control-plane created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-ztovpr-public-custom-vnet-control-plane created
  machinedeployment.cluster.x-k8s.io/capz-e2e-ztovpr-public-custom-vnet-md-0 created
  azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-ztovpr-public-custom-vnet-md-0 created
... skipping 248 lines ...
  Jan 25 18:06:48.508: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-kmhtn
  Jan 25 18:06:48.508: INFO: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-ztovpr-public-custom-vnet-control-plane-xzztr, container kube-apiserver
  Jan 25 18:06:48.544: INFO: Fetching kube-system pod logs took 590.36919ms
  Jan 25 18:06:48.544: INFO: Dumping workload cluster capz-e2e-ztovpr/capz-e2e-ztovpr-public-custom-vnet Azure activity log
  Jan 25 18:06:48.544: INFO: Creating log watcher for controller tigera-operator/tigera-operator-64db64cb98-ptd2p, container tigera-operator
  Jan 25 18:06:48.545: INFO: Collecting events for Pod tigera-operator/tigera-operator-64db64cb98-ptd2p
  Jan 25 18:06:58.327: INFO: Got error while iterating over activity logs for resource group capz-e2e-ztovpr-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure responding to next results request: StatusCode=404 -- Original Error: autorest/azure: error response cannot be parsed: {"<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Strict//EN\" \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd\">\r\n<html xmlns=\"http://www.w3.org/1999/xhtml\">\r\n<head>\r\n<meta http-equiv=\"Content-Type\" content=\"text/html; charset=iso-8859-1\"/>\r\n<title>404 - File or directory not found.</title>\r\n<style type=\"text/css\">\r\n<!--\r\nbody{margin:0;font-size:.7em;font-family:Verdana, Arial, Helvetica, sans-serif;background:#EEEEEE;}\r\nfieldset{padding:0 15px 10px 15px;} \r\nh1{font-size:2.4em;margin:0;color:#FFF;}\r\nh2{font-si" '\x00' '\x00'} error: invalid character '<' looking for beginning of value
  Jan 25 18:06:58.327: INFO: Fetching activity logs took 9.782970703s
  Jan 25 18:06:58.327: INFO: Dumping all the Cluster API resources in the "capz-e2e-ztovpr" namespace
  Jan 25 18:06:58.720: INFO: Deleting all clusters in the capz-e2e-ztovpr namespace
  STEP: Deleting cluster capz-e2e-ztovpr-public-custom-vnet @ 01/25/23 18:06:58.745
  INFO: Waiting for the Cluster capz-e2e-ztovpr/capz-e2e-ztovpr-public-custom-vnet to be deleted
  STEP: Waiting for cluster capz-e2e-ztovpr-public-custom-vnet to be deleted @ 01/25/23 18:06:58.765
  INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-669bd95bbb-9h5t9, container manager: http2: client connection lost
  INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-888fd85cd-j5dqf, container manager: http2: client connection lost
  INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6f7b75f796-gwlhj, container manager: http2: client connection lost
  INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-687b6fd9bc-6ktfz, container manager: http2: client connection lost
  Jan 25 18:09:48.889: INFO: Deleting namespace used for hosting the "create-workload-cluster" test spec
  INFO: Deleting namespace capz-e2e-ztovpr
  Jan 25 18:09:48.911: INFO: Running additional cleanup for the "create-workload-cluster" test spec
  Jan 25 18:09:48.911: INFO: deleting an existing virtual network "custom-vnet"
  Jan 25 18:09:59.694: INFO: deleting an existing route table "node-routetable"
  Jan 25 18:10:02.343: INFO: deleting an existing network security group "node-nsg"
... skipping 16 lines ...
[ReportAfterSuite] PASSED [0.014 seconds]
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
------------------------------

Summarizing 1 Failure:
  [FAIL] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] [It] with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:80

Ran 7 of 24 Specs in 3914.616 seconds
FAIL! -- 6 Passed | 1 Failed | 0 Pending | 17 Skipped

You're using deprecated Ginkgo functionality:
=============================================
  CurrentGinkgoTestDescription() is deprecated in Ginkgo V2.  Use CurrentSpecReport() instead.
  Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:426
... skipping 85 lines ...
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:285
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:426

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=2.7.0

--- FAIL: TestE2E (2602.81s)
FAIL


Ginkgo ran 1 suite in 1h8m37.8150205s

Test Suite Failed
make[1]: *** [Makefile:654: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:663: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...