PR       | jackfrancis: Update default k8s version to v1.25 for testing
Result   | FAILURE
Tests    | 1 failed / 26 succeeded
Started  |
Elapsed  | 1h13m
Revision | aa4b89f70338b5bf172b792cbe9a26a0f73595d6
Refs     | 3088
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sWorkload\scluster\screation\sCreating\sa\sGPU\-enabled\scluster\s\[OPTIONAL\]\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
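Decoded, the --ginkgo.focus expression above selects the single spec "capz-e2e [It] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node", i.e. the failing test reported below.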
[FAILED] Timed out after 1500.001s.

Logs for pod gpu-operator-node-feature-discovery-master-77bc558fdc-zd7rh:
I0125 01:54:02.020848 1 nfd-master.go:170] Node Feature Discovery Master v0.10.1
I0125 01:54:02.021405 1 nfd-master.go:174] NodeName: "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 01:54:02.021418 1 nfd-master.go:185] starting nfd LabelRule controller
I0125 01:54:02.049822 1 nfd-master.go:226] gRPC server serving on port: 8080
I0125 01:54:13.799770 1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 01:54:52.274412 1 nfd-master.go:423] received labeling request for node "capz-e2e-plb7mq-gpu-md-0-fngb8"
... (the same pair of labeling requests repeats roughly every 60s; last control-plane request at 02:20:14.583191, last worker request at 02:20:53.132559)

Logs for pod gpu-operator-node-feature-discovery-worker-bbnrb:
I0125 01:54:52.247172 1 nfd-worker.go:155] Node Feature Discovery Worker v0.10.1
I0125 01:54:52.247238 1 nfd-worker.go:156] NodeName: 'capz-e2e-plb7mq-gpu-md-0-fngb8'
I0125 01:54:52.247645 1 nfd-worker.go:423] configuration file "/etc/kubernetes/node-feature-discovery/nfd-worker.conf" parsed
I0125 01:54:52.247718 1 nfd-worker.go:461] worker (re-)configuration successfully completed
I0125 01:54:52.247763 1 base.go:126] connecting to nfd-master at gpu-operator-node-feature-discovery-master:8080 ...
I0125 01:54:52.247824 1 component.go:36] [core]parsed scheme: ""
I0125 01:54:52.247846 1 component.go:36] [core]scheme "" not registered, fallback to default scheme
I0125 01:54:52.247879 1 component.go:36] [core]ccResolverWrapper: sending update to cc: {[{gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}] <nil> <nil>}
I0125 01:54:52.247905 1 component.go:36] [core]ClientConn switching balancer to "pick_first"
I0125 01:54:52.247922 1 component.go:36] [core]Channel switches to new LB policy "pick_first"
I0125 01:54:52.247952 1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
I0125 01:54:52.247994 1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
I0125 01:54:52.248364 1 component.go:36] [core]Channel Connectivity change to CONNECTING
I0125 01:54:52.251072 1 component.go:36] [core]Subchannel Connectivity change to READY
I0125 01:54:52.251092 1 component.go:36] [core]Channel Connectivity change to READY
I0125 01:54:52.258965 1 nfd-worker.go:472] starting feature discovery...
I0125 01:54:52.259072 1 nfd-worker.go:484] feature discovery completed
I0125 01:54:52.259082 1 nfd-worker.go:565] sending labeling request to nfd-master
... (the discovery/labeling cycle repeats roughly every 60s; last cycle at 02:20:53.120457-02:20:53.120662)

Logs for pod gpu-operator-node-feature-discovery-worker-n2jgl:
I0125 01:54:13.715272 1 nfd-worker.go:155] Node Feature Discovery Worker v0.10.1
I0125 01:54:13.715358 1 nfd-worker.go:156] NodeName: 'capz-e2e-plb7mq-gpu-control-plane-fzzxp'
I0125 01:54:13.716430 1 nfd-worker.go:423] configuration file "/etc/kubernetes/node-feature-discovery/nfd-worker.conf" parsed
I0125 01:54:13.717771 1 nfd-worker.go:461] worker (re-)configuration successfully completed
I0125 01:54:13.718410 1 base.go:126] connecting to nfd-master at gpu-operator-node-feature-discovery-master:8080 ...
... (same [core] gRPC connection sequence as worker-bbnrb, ending READY at 01:54:13.744702)
I0125 01:54:13.772659 1 nfd-worker.go:472] starting feature discovery...
I0125 01:54:13.783953 1 nfd-worker.go:484] feature discovery completed
I0125 01:54:13.784103 1 nfd-worker.go:565] sending labeling request to nfd-master
... (the discovery/labeling cycle repeats roughly every 60s; last cycle at 02:20:14.571647-02:20:14.572158)

Expected
    <bool>: false
to be true
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:80 @ 01/25/23 02:21:08.678

from junit.e2e_suite.1.xml
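The assertion that expires here is the suite's wait, at test/e2e/azure_gpu.go:80, for some node in the workload cluster to report a non-zero allocatable "nvidia.com/gpu" resource. For orientation, a minimal client-go sketch of that kind of poll is below; the file name, helper name, 10s interval, and kubeconfig handling are illustrative assumptions, not the actual CAPZ test code.

    // gpucheck.go: poll until any node reports allocatable nvidia.com/gpu.
    // Sketch only; assumes a kubeconfig pointing at the workload cluster.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // hasGPUNode returns true if any node advertises a non-zero
    // allocatable "nvidia.com/gpu" (normally set by the NVIDIA device plugin).
    func hasGPUNode(ctx context.Context, cs kubernetes.Interface) (bool, error) {
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, n := range nodes.Items {
    		if q, ok := n.Status.Allocatable["nvidia.com/gpu"]; ok && !q.IsZero() {
    			return true, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// 1500s mirrors the "Timed out after 1500.001s" budget in this failure.
    	deadline := time.After(1500 * time.Second)
    	tick := time.NewTicker(10 * time.Second) // assumed polling interval
    	defer tick.Stop()
    	for {
    		select {
    		case <-deadline:
    			fmt.Println("timed out waiting for an allocatable nvidia.com/gpu resource")
    			return
    		case <-tick.C:
    			if ok, err := hasGPUNode(context.Background(), cs); err == nil && ok {
    				fmt.Println("found a node with allocatable nvidia.com/gpu")
    				return
    			}
    		}
    	}
    }

Pointed at this run's workload cluster, a check like this would have kept returning false for the full 1500s budget.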
2023/01/25 01:46:28 failed trying to get namespace (capz-e2e-plb7mq): namespaces "capz-e2e-plb7mq" not found
cluster.cluster.x-k8s.io/capz-e2e-plb7mq-gpu serverside-applied
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-plb7mq-gpu serverside-applied
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-plb7mq-gpu-control-plane serverside-applied
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-plb7mq-gpu-control-plane serverside-applied
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp serverside-applied
machinedeployment.cluster.x-k8s.io/capz-e2e-plb7mq-gpu-md-0 serverside-applied
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-plb7mq-gpu-md-0 serverside-applied
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-plb7mq-gpu-md-0 serverside-applied
clusterresourceset.addons.cluster.x-k8s.io/crs-gpu-operator serverside-applied
configmap/nvidia-clusterpolicy-crd serverside-applied
configmap/nvidia-gpu-operator-components serverside-applied
felixconfiguration.crd.projectcalico.org/default created

> Enter [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:56 @ 01/25/23 01:46:28.222
INFO: "" started at Wed, 25 Jan 2023 01:46:28 UTC on Ginkgo node 10 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating namespace "capz-e2e-plb7mq" for hosting the cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 01:46:28.222
Jan 25 01:46:28.222: INFO: starting to create namespace for hosting the "capz-e2e-plb7mq" test spec
INFO: Creating namespace capz-e2e-plb7mq
INFO: Creating event watcher for namespace "capz-e2e-plb7mq"
Jan 25 01:46:28.439: INFO: Creating cluster identity secret "cluster-identity-secret"
< Exit [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:56 @ 01/25/23 01:46:28.581 (360ms)

> Enter [It] with a single control plane node and 1 node - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:506 @ 01/25/23 01:46:28.581
INFO: Cluster name is capz-e2e-plb7mq-gpu
INFO: Creating the workload cluster with name "capz-e2e-plb7mq-gpu" using the "nvidia-gpu" template (Kubernetes v1.25.6, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-e2e-plb7mq-gpu --infrastructure (default) --kubernetes-version v1.25.6 --control-plane-machine-count 1 --worker-machine-count 1 --flavor nvidia-gpu
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:134 @ 01/25/23 01:46:35.963
INFO: Waiting for control plane to be initialized
STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:51 @ 01/25/23 01:48:56.258
STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:112 @ 01/25/23 01:48:56.258
Jan 25 01:51:43.941: INFO: getting history for release projectcalico
Jan 25 01:51:44.119: INFO: Release projectcalico does not exist, installing it
Jan 25 01:51:45.526: INFO: creating 1 resource(s)
Jan 25 01:51:46.141: INFO: creating 1 resource(s)
Jan 25 01:51:46.285: INFO: creating 1 resource(s)
Jan 25 01:51:46.420: INFO: creating 1 resource(s)
Jan 25 01:51:46.555: INFO: creating 1 resource(s)
Jan 25 01:51:46.722: INFO: creating 1 resource(s)
Jan 25 01:51:47.024: INFO: creating 1 resource(s)
Jan 25 01:51:47.226: INFO: creating 1 resource(s)
Jan 25 01:51:47.353: INFO: creating 1 resource(s)
Jan 25 01:51:47.517: INFO: creating 1 resource(s)
Jan 25 01:51:47.670: INFO: creating 1 resource(s)
Jan 25 01:51:47.818: INFO: creating 1 resource(s)
Jan 25 01:51:47.977: INFO: creating 1 resource(s)
Jan 25 01:51:48.124: INFO: creating 1 resource(s)
Jan 25 01:51:48.277: INFO: creating 1 resource(s)
Jan 25 01:51:48.432: INFO: creating 1 resource(s)
Jan 25 01:51:48.614: INFO: creating 1 resource(s)
Jan 25 01:51:48.832: INFO: creating 1 resource(s)
Jan 25 01:51:49.031: INFO: creating 1 resource(s)
Jan 25 01:51:49.305: INFO: creating 1 resource(s)
Jan 25 01:51:49.960: INFO: creating 1 resource(s)
Jan 25 01:51:50.096: INFO: Clearing discovery cache
Jan 25 01:51:50.096: INFO: beginning wait for 21 resources with timeout of 1m0s
Jan 25 01:51:56.381: INFO: creating 1 resource(s)
Jan 25 01:51:57.513: INFO: creating 6 resource(s)
Jan 25 01:51:58.993: INFO: Install complete
STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:60 @ 01/25/23 01:52:00.148
STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 01:52:00.592
Jan 25 01:52:00.592: INFO: starting to wait for deployment to become available
Jan 25 01:52:10.810: INFO: Deployment tigera-operator/tigera-operator is now available, took 10.218162872s
STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:74 @ 01/25/23 01:52:13.412
STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 01:52:44.301
Jan 25 01:52:44.301: INFO: starting to wait for deployment to become available
Jan 25 01:54:15.809: INFO: Deployment calico-system/calico-kube-controllers is now available, took 1m31.508016533s
STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 01:54:16.36
Jan 25 01:54:16.360: INFO: starting to wait for deployment to become available
Jan 25 01:54:16.470: INFO: Deployment calico-system/calico-typha is now available, took 110.186522ms
STEP: Waiting for Ready calico-apiserver deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:79 @ 01/25/23 01:54:16.47
STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 01:54:47.359
Jan 25 01:54:47.359: INFO: starting to wait for deployment to become available
Jan 25 01:55:18.068: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 30.70882975s
INFO: Waiting for the first control plane machine managed by capz-e2e-plb7mq/capz-e2e-plb7mq-gpu-control-plane to be provisioned
STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:133 @ 01/25/23 01:55:18.097
STEP: Installing azure-disk CSI driver components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:65 @ 01/25/23 01:55:18.104
Jan 25 01:55:18.247: INFO: getting history for release azuredisk-csi-driver-oot
Jan 25 01:55:18.356: INFO: Release azuredisk-csi-driver-oot does not exist, installing it
Jan 25 01:55:22.656: INFO: creating 1 resource(s)
Jan 25 01:55:23.010: INFO: creating 18 resource(s)
Jan 25 01:55:23.881: INFO: Install complete
STEP: Waiting for Ready csi-azuredisk-controller deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:75 @ 01/25/23 01:55:23.904
STEP: waiting for deployment kube-system/csi-azuredisk-controller to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 01:55:24.351
Jan 25 01:55:24.351: INFO: starting to wait for deployment to become available
Jan 25 01:56:04.904: INFO: Deployment kube-system/csi-azuredisk-controller is now available, took 40.552825207s
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-e2e-plb7mq/capz-e2e-plb7mq-gpu-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:165 @ 01/25/23 01:56:04.921
STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:196 @ 01/25/23 01:56:04.927
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:102 @ 01/25/23 01:56:04.958
STEP: Checking all the machines controlled by capz-e2e-plb7mq-gpu-md-0 are in the "<None>" failure domain - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/25/23 01:56:04.97
INFO: Waiting for the machine pools to be provisioned
INFO: Calling PostMachinesProvisioned
STEP: Waiting for all DaemonSet Pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/daemonsets.go:71 @ 01/25/23 01:56:05.114
STEP: waiting for 2 daemonset calico-system/calico-node pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 01:56:05.785
Jan 25 01:56:05.785: INFO: 2 daemonset calico-system/calico-node pods are running, took 109.84637ms
STEP: waiting for 2 daemonset calico-system/csi-node-driver pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 01:56:05.895
Jan 25 01:56:05.895: INFO: 2 daemonset calico-system/csi-node-driver pods are running, took 108.740555ms
STEP: waiting for 2 daemonset gpu-operator-resources/gpu-operator-node-feature-discovery-worker pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 01:56:06.005
Jan 25 01:56:06.005: INFO: 2 daemonset gpu-operator-resources/gpu-operator-node-feature-discovery-worker pods are running, took 108.86933ms
STEP: waiting for 2 daemonset kube-system/csi-azuredisk-node pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 01:56:06.116
Jan 25 01:56:06.116: INFO: 2 daemonset kube-system/csi-azuredisk-node pods are running, took 109.128975ms
STEP: daemonset kube-system/csi-azuredisk-node-win has no schedulable nodes, will skip - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 01:56:06.227
STEP: waiting for 2 daemonset kube-system/kube-proxy pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 01:56:06.336
Jan 25 01:56:06.336: INFO: 2 daemonset kube-system/kube-proxy pods are running, took 108.578233ms
STEP: Verifying expected VM extensions are present on the node - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:544 @ 01/25/23 01:56:06.336
STEP: creating a Kubernetes client to the workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_vmextensions.go:62 @ 01/25/23 01:56:06.336
STEP: Retrieving all machines from the machine template spec - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_vmextensions.go:68 @ 01/25/23 01:56:06.386
Jan 25 01:56:06.386: INFO: Listing machines in namespace capz-e2e-plb7mq with label cluster.x-k8s.io/cluster-name=capz-e2e-plb7mq-gpu
STEP: Creating a mapping of machine IDs to array of expected VM extensions - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_vmextensions.go:83 @ 01/25/23 01:56:06.399
STEP: Creating a VM and VM extension client - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_vmextensions.go:91 @ 01/25/23 01:56:06.399
STEP: Verifying specified VM extensions are created on Azure - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_vmextensions.go:108 @ 01/25/23 01:56:07.304
STEP: Retrieving all machine pools from the machine template spec - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_vmextensions.go:123 @ 01/25/23 01:56:08.163
Jan 25 01:56:08.163: INFO: Listing machine pools in namespace capz-e2e-plb7mq with label cluster.x-k8s.io/cluster-name=capz-e2e-plb7mq-gpu
END STEP: Verifying expected VM extensions are present on the node - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:544 @ 01/25/23 01:56:08.167 (1.831s)
STEP: Running a GPU-based calculation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:554 @ 01/25/23 01:56:08.167
STEP: creating a Kubernetes client to the workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:62 @ 01/25/23 01:56:08.167
STEP: Waiting for a node to have an "nvidia.com/gpu" allocatable resource - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:68 @ 01/25/23 01:56:08.194
END STEP: Running a GPU-based calculation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:554 @ 01/25/23 02:21:08.678 (25m0.511s)
[FAILED] Timed out after 1500.001s.
Expected
    <bool>: false
to be true
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:80 @ 01/25/23 02:21:08.678
< Exit [It] with a single control plane node and 1 node - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:506 @ 01/25/23 02:21:08.678 (34m40.096s)

> Enter [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:117 @ 01/25/23 02:21:08.678
Jan 25 02:21:08.678: INFO: FAILED!
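Throughout the 25-minute wait both node-feature-discovery workers complete feature discovery every minute and the master receives their labeling requests, yet no node ever advertises an allocatable nvidia.com/gpu resource. That pattern is consistent with the GPU operator's driver or device-plugin rollout never completing on the GPU node, rather than with a node-feature-discovery failure, although these logs alone do not show which component stalled.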
Jan 25 02:21:08.678: INFO: Cleaning up after "Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node" spec
STEP: Dumping logs from the "capz-e2e-plb7mq-gpu" workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 02:21:08.678
Jan 25 02:21:08.678: INFO: Dumping workload cluster capz-e2e-plb7mq/capz-e2e-plb7mq-gpu logs
Jan 25 02:21:08.738: INFO: Collecting logs for Linux node capz-e2e-plb7mq-gpu-control-plane-fzzxp in cluster capz-e2e-plb7mq-gpu in namespace capz-e2e-plb7mq
Jan 25 02:21:31.673: INFO: Collecting boot logs for AzureMachine capz-e2e-plb7mq-gpu-control-plane-fzzxp
Jan 25 02:21:33.578: INFO: Collecting logs for Linux node capz-e2e-plb7mq-gpu-md-0-fngb8 in cluster capz-e2e-plb7mq-gpu in namespace capz-e2e-plb7mq
Jan 25 02:21:43.932: INFO: Collecting boot logs for AzureMachine capz-e2e-plb7mq-gpu-md-0-fngb8
Jan 25 02:21:44.671: INFO: Dumping workload cluster capz-e2e-plb7mq/capz-e2e-plb7mq-gpu kube-system pod logs
... skipping log-watcher creation and Pod event collection at 02:21:45 for the calico-apiserver, calico-system, gpu-operator-resources, kube-system, and tigera-operator pods ...
Jan 25 02:21:45.990: INFO: Fetching kube-system pod logs took 1.318165568s
Jan 25 02:21:45.990: INFO: Dumping workload cluster capz-e2e-plb7mq/capz-e2e-plb7mq-gpu Azure activity log
Jan 25 02:21:50.049: INFO: Fetching activity logs took 4.059658595s
Jan 25 02:21:50.049: INFO: Dumping all the Cluster API resources in the "capz-e2e-plb7mq" namespace
Jan 25 02:21:50.380: INFO: Deleting all clusters in the capz-e2e-plb7mq namespace
STEP: Deleting cluster capz-e2e-plb7mq-gpu - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/25/23 02:21:50.397
INFO: Waiting for the Cluster capz-e2e-plb7mq/capz-e2e-plb7mq-gpu to be deleted
STEP: Waiting for cluster capz-e2e-plb7mq-gpu to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/25/23 02:21:50.413
Jan 25 02:28:30.631: INFO: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-plb7mq
Jan 25 02:28:30.647: INFO: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:216 @ 01/25/23 02:28:31.183
INFO: "with a single control plane node and 1 node" started at Wed, 25 Jan 2023 02:29:43 UTC on Ginkgo node 10 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
< Exit [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:117 @ 01/25/23 02:29:43.939 (8m35.262s)
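The AfterEach above blocks on cluster deletion for roughly 6m40s (02:21:50 to 02:28:30). That wait is just a poll on the management cluster for the Cluster object to disappear once its finalizers run; a sketch under the assumption of a controller-runtime client, with an illustrative helper name rather than the framework's actual function:

    // Sketch: wait until a CAPI Cluster object is fully deleted from the
    // management cluster, mirroring "Waiting for cluster ... to be deleted".
    package e2e

    import (
    	"context"
    	"time"

    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	"k8s.io/apimachinery/pkg/util/wait"
    	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
    	"sigs.k8s.io/controller-runtime/pkg/client"
    )

    func waitForClusterDeleted(ctx context.Context, c client.Client, namespace, name string, timeout time.Duration) error {
    	return wait.PollImmediate(10*time.Second, timeout, func() (bool, error) {
    		cluster := &clusterv1.Cluster{}
    		err := c.Get(ctx, client.ObjectKey{Namespace: namespace, Name: name}, cluster)
    		if apierrors.IsNotFound(err) {
    			return true, nil // finalizers have run; deletion is complete
    		}
    		return false, nil // still deleting (or transient error): keep polling
    	})
    }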
capz-e2e [It] Workload cluster creation Creating a Flatcar cluster [OPTIONAL] With Flatcar control-plane and worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and machinepools [OPTIONAL] with 1 control plane node and 1 machinepool
capz-e2e [It] Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e [It] Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
capz-e2e [It] Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [It] Conformance Tests conformance-tests
capz-e2e [It] Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e [It] Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e [It] Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e [It] Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e [It] Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e [It] Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e [It] Workload cluster creation Creating an AKS cluster [Managed Kubernetes] with a single control plane node and 1 node
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=external CCM=external AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with out-of-tree cloud provider
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=external CCM=internal AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=internal CCM=internal AzureDiskCSIMigration=false: upgrade to v1.23 should create volumes dynamically with intree cloud provider
... skipping 620 lines ...
------------------------------
• [944.994 seconds]
Workload cluster creation Creating a Flatcar cluster [OPTIONAL] With Flatcar control-plane and worker nodes
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:321

Captured StdOut/StdErr Output >>
2023/01/25 01:46:28 failed trying to get namespace (capz-e2e-q8kecj):namespaces "capz-e2e-q8kecj" not found
cluster.cluster.x-k8s.io/capz-e2e-q8kecj-flatcar created
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-q8kecj-flatcar created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-q8kecj-flatcar-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-q8kecj-flatcar-control-plane created
machinedeployment.cluster.x-k8s.io/capz-e2e-q8kecj-flatcar-md-0 created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-q8kecj-flatcar-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-q8kecj-flatcar-md-0 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
felixconfiguration.crd.projectcalico.org/default configured
Failed to get logs for Machine capz-e2e-q8kecj-flatcar-control-plane-mnjxv, Cluster capz-e2e-q8kecj/capz-e2e-q8kecj-flatcar: [dialing public load balancer at capz-e2e-q8kecj-flatcar-a4881c85.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.63.62:54292->20.242.177.37:22: read: connection reset by peer ... 1 further dial attempt failed the same way ...]
Failed to get logs for Machine capz-e2e-q8kecj-flatcar-md-0-7b64786657-5qk2d, Cluster capz-e2e-q8kecj/capz-e2e-q8kecj-flatcar: [dialing public load balancer at capz-e2e-q8kecj-flatcar-a4881c85.eastus.cloudapp.azure.com: ssh: handshake failed: read tcp 10.60.63.62:41644->20.242.177.37:22: read: connection reset by peer ... 8 further dial attempts failed the same way ...]
<< Captured StdOut/StdErr Output

Timeline >>
INFO: "" started at Wed, 25 Jan 2023 01:46:28 UTC on Ginkgo node 4 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating namespace "capz-e2e-q8kecj" for hosting the cluster @ 01/25/23 01:46:28.22
Jan 25 01:46:28.220: INFO: starting to create namespace for hosting the "capz-e2e-q8kecj" test spec
... skipping 157 lines ...
------------------------------
• [971.154 seconds]
Workload cluster creation Creating a cluster that uses the external cloud provider and machinepools [OPTIONAL] with 1 control plane node and 1 machinepool
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:573

Captured StdOut/StdErr Output >>
2023/01/25 01:46:28 failed trying to get namespace (capz-e2e-6fan57):namespaces "capz-e2e-6fan57" not found
cluster.cluster.x-k8s.io/capz-e2e-6fan57-flex created
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-6fan57-flex created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-6fan57-flex-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-6fan57-flex-control-plane created
machinepool.cluster.x-k8s.io/capz-e2e-6fan57-flex-mp-0 created
azuremachinepool.infrastructure.cluster.x-k8s.io/capz-e2e-6fan57-flex-mp-0 created
... skipping 2 lines ...
felixconfiguration.crd.projectcalico.org/default configured
W0125 01:55:12.184736 36968 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
2023/01/25 01:55:52 [DEBUG] GET http://20.124.140.220
W0125 01:56:26.159663 36968 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
Failed to get logs for MachinePool capz-e2e-6fan57-flex-mp-0, Cluster capz-e2e-6fan57/capz-e2e-6fan57-flex: Unable to collect VMSS Boot Diagnostic logs: failed to parse resource id: parsing failed for /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-6fan57-flex/providers/Microsoft.Compute. Invalid resource Id format
<< Captured StdOut/StdErr Output

Timeline >>
INFO: "" started at Wed, 25 Jan 2023 01:46:28 UTC on Ginkgo node 7 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating namespace "capz-e2e-6fan57" for hosting the cluster @ 01/25/23 01:46:28.228
Jan 25 01:46:28.228: INFO: starting to create namespace for hosting the "capz-e2e-6fan57" test spec
... skipping 229 lines ...
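The boot-diagnostics error above is a client-side parse failure, not an Azure outage: the resource ID ends at the provider namespace Microsoft.Compute with no resource type or resource name after it. A short sketch with go-autorest's parser reproduces the complaint (assuming that family of parser; the exact call site isn't shown in this log, and the VMSS name in the "good" ID is illustrative):

    package main

    import (
    	"fmt"

    	"github.com/Azure/go-autorest/autorest/azure"
    )

    func main() {
    	// Truncated ID exactly as it appears in the log: provider namespace
    	// only, with nothing after Microsoft.Compute.
    	bad := "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-e2e-6fan57-flex/providers/Microsoft.Compute"
    	if _, err := azure.ParseResourceID(bad); err != nil {
    		fmt.Println("parse failed:", err) // mirrors "Invalid resource Id format"
    	}

    	// A complete VMSS ID (name is illustrative) parses cleanly.
    	good := bad + "/virtualMachineScaleSets/capz-e2e-6fan57-flex-mp-0"
    	r, err := azure.ParseResourceID(good)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(r.Provider, r.ResourceType, r.ResourceName)
    }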
------------------------------
• [1290.217 seconds]
Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:830

Captured StdOut/StdErr Output >>
2023/01/25 01:46:28 failed trying to get namespace (capz-e2e-3zij53):namespaces "capz-e2e-3zij53" not found
cluster.cluster.x-k8s.io/capz-e2e-3zij53-dual-stack created
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-3zij53-dual-stack created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-3zij53-dual-stack-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-3zij53-dual-stack-control-plane created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
machinedeployment.cluster.x-k8s.io/capz-e2e-3zij53-dual-stack-md-0 created
... skipping 330 lines ...
------------------------------
• [1294.125 seconds]
Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:637

Captured StdOut/StdErr Output >>
2023/01/25 01:46:28 failed trying to get namespace (capz-e2e-pb2ieg):namespaces "capz-e2e-pb2ieg" not found
cluster.cluster.x-k8s.io/capz-e2e-pb2ieg-oot created
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-pb2ieg-oot created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-pb2ieg-oot-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-pb2ieg-oot-control-plane created
machinedeployment.cluster.x-k8s.io/capz-e2e-pb2ieg-oot-md-0 created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-pb2ieg-oot-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-pb2ieg-oot-md-0 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
felixconfiguration.crd.projectcalico.org/default configured
W0125 01:56:00.455064 36990 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
2023/01/25 01:57:51 [DEBUG] GET http://20.124.141.23
2023/01/25 01:58:21 [ERR] GET http://20.124.141.23 request failed: Get "http://20.124.141.23": dial tcp 20.124.141.23:80: i/o timeout
2023/01/25 01:58:21 [DEBUG] GET http://20.124.141.23: retrying in 1s (4 left)
2023/01/25 01:58:52 [ERR] GET http://20.124.141.23 request failed: Get "http://20.124.141.23": dial tcp 20.124.141.23:80: i/o timeout
2023/01/25 01:58:52 [DEBUG] GET http://20.124.141.23: retrying in 2s (3 left)
W0125 01:59:30.602672 36990 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
<< Captured StdOut/StdErr Output

Timeline >>
INFO: "" started at Wed, 25 Jan 2023 01:46:28 UTC on Ginkgo node 9 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
... skipping 275 lines ...
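The "retrying in 1s (4 left)" / "retrying in 2s (3 left)" lines show a GET against the service's public IP being retried with doubling backoff until the load balancer starts answering. A hand-rolled equivalent of that loop, illustrative rather than the suite's actual HTTP client:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // getWithRetry issues GETs with doubling backoff, logging in the same
    // spirit as the "[DEBUG] GET ...: retrying in 1s (4 left)" lines above.
    func getWithRetry(url string, attempts int) (*http.Response, error) {
    	backoff := time.Second
    	var lastErr error
    	for left := attempts; left > 0; left-- {
    		resp, err := http.Get(url)
    		if err == nil && resp.StatusCode < 500 {
    			return resp, nil
    		}
    		if err == nil {
    			resp.Body.Close()
    			err = fmt.Errorf("server error: %s", resp.Status)
    		}
    		lastErr = err
    		fmt.Printf("[ERR] GET %s request failed: %v\n", url, err)
    		if left > 1 {
    			fmt.Printf("[DEBUG] GET %s: retrying in %s (%d left)\n", url, backoff, left-1)
    			time.Sleep(backoff)
    			backoff *= 2
    		}
    	}
    	return nil, lastErr
    }

    func main() {
    	if resp, err := getWithRetry("http://20.124.141.23", 5); err == nil {
    		resp.Body.Close()
    	}
    }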
------------------------------
• [1370.390 seconds]
Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:906

Captured StdOut/StdErr Output >>
2023/01/25 01:46:28 failed trying to get namespace (capz-e2e-yq798v):namespaces "capz-e2e-yq798v" not found
clusterclass.cluster.x-k8s.io/ci-default created
kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/ci-default-kubeadm-control-plane created
azureclustertemplate.infrastructure.cluster.x-k8s.io/ci-default-azure-cluster created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/ci-default-control-plane created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/ci-default-worker created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/ci-default-worker created
... skipping 5 lines ...
clusterresourceset.addons.cluster.x-k8s.io/csi-proxy created
configmap/cni-capz-e2e-yq798v-cc-calico-windows created
configmap/csi-proxy-addon created
felixconfiguration.crd.projectcalico.org/default created
Failed to get logs for Machine capz-e2e-yq798v-cc-2dmkx-v56dj, Cluster capz-e2e-yq798v/capz-e2e-yq798v-cc: dialing public load balancer at capz-e2e-yq798v-cc-eb500a1.eastus.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for Machine capz-e2e-yq798v-cc-md-0-zj6fk-f866858cb-h8c6m, Cluster capz-e2e-yq798v/capz-e2e-yq798v-cc: dialing public load balancer at capz-e2e-yq798v-cc-eb500a1.eastus.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for Machine capz-e2e-yq798v-cc-md-win-6p6hs-5ff7b95785-84zrz, Cluster capz-e2e-yq798v/capz-e2e-yq798v-cc: dialing public load balancer at capz-e2e-yq798v-cc-eb500a1.eastus.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
<< Captured StdOut/StdErr Output

Timeline >>
INFO: "" started at Wed, 25 Jan 2023 01:46:28 UTC on Ginkgo node 2 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating namespace "capz-e2e-yq798v" for hosting the cluster @ 01/25/23 01:46:28.232
Jan 25 01:46:28.232: INFO: starting to create namespace for hosting the "capz-e2e-yq798v" test spec
... skipping 183 lines ...
Jan 25 02:03:20.179: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-6tf6k, container azuredisk
Jan 25 02:03:20.179: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-6tf6k
Jan 25 02:03:20.228: INFO: Fetching kube-system pod logs took 545.148328ms
Jan 25 02:03:20.228: INFO: Dumping workload cluster capz-e2e-yq798v/capz-e2e-yq798v-cc Azure activity log
Jan 25 02:03:20.229: INFO: Creating log watcher for controller tigera-operator/tigera-operator-64db64cb98-n2t58, container tigera-operator
Jan 25 02:03:20.229: INFO: Collecting events for Pod tigera-operator/tigera-operator-64db64cb98-n2t58
Jan 25 02:03:20.251: INFO: Error fetching activity logs for cluster capz-e2e-yq798v-cc in namespace capz-e2e-yq798v.
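Unlike the Flatcar failures above (connections reset mid-handshake), these three failures get all the way to user authentication: the server rejects both the "none" and "publickey" methods, meaning the key the log collector offered is not installed on the VMs. The dial path, sketched with golang.org/x/crypto/ssh and illustrative user and key-path values (not the e2e framework's actual code):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Illustrative inputs; the e2e framework resolves these per cluster.
    	keyBytes, err := os.ReadFile("/home/prow/.ssh/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User: "capi",
    		// The client tries "none", then "publickey" with this signer; if
    		// the VM does not hold the matching public key, both are rejected
    		// and the error reads: unable to authenticate, attempted methods
    		// [none publickey], no supported methods remain.
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // e2e-style; not for production
    	}
    	client, err := ssh.Dial("tcp", "capz-e2e-yq798v-cc-eb500a1.eastus.cloudapp.azure.com:22", cfg)
    	if err != nil {
    		fmt.Println("dial failed:", err)
    		return
    	}
    	defer client.Close()
    }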
Not able to find the AzureManagedControlPlane on the management cluster: azuremanagedcontrolplanes.infrastructure.cluster.x-k8s.io "capz-e2e-yq798v-cc" not found
Jan 25 02:03:20.251: INFO: Fetching activity logs took 22.634135ms
Jan 25 02:03:20.251: INFO: Dumping all the Cluster API resources in the "capz-e2e-yq798v" namespace
Jan 25 02:03:20.625: INFO: Deleting all clusters in the capz-e2e-yq798v namespace
STEP: Deleting cluster capz-e2e-yq798v-cc @ 01/25/23 02:03:20.647
INFO: Waiting for the Cluster capz-e2e-yq798v/capz-e2e-yq798v-cc to be deleted
STEP: Waiting for cluster capz-e2e-yq798v-cc to be deleted @ 01/25/23 02:03:20.664
... skipping 5 lines ...
<< Timeline
------------------------------
[SynchronizedAfterSuite] PASSED [0.000 seconds]
[SynchronizedAfterSuite]
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:116
------------------------------
• [FAILED] [2595.718 seconds]
Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] [It] with a single control plane node and 1 node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:506

Captured StdOut/StdErr Output >>
2023/01/25 01:46:28 failed trying to get namespace (capz-e2e-plb7mq):namespaces "capz-e2e-plb7mq" not found
cluster.cluster.x-k8s.io/capz-e2e-plb7mq-gpu serverside-applied
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-plb7mq-gpu serverside-applied
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-plb7mq-gpu-control-plane serverside-applied
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-plb7mq-gpu-control-plane serverside-applied
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp serverside-applied
machinedeployment.cluster.x-k8s.io/capz-e2e-plb7mq-gpu-md-0 serverside-applied
... skipping 109 lines ...
STEP: Verifying specified VM extensions are created on Azure @ 01/25/23 01:56:07.304
STEP: Retrieving all machine pools from the machine template spec @ 01/25/23 01:56:08.163
Jan 25 01:56:08.163: INFO: Listing machine pools in namespace capz-e2e-plb7mq with label cluster.x-k8s.io/cluster-name=capz-e2e-plb7mq-gpu
STEP: Running a GPU-based calculation @ 01/25/23 01:56:08.167
STEP: creating a Kubernetes client to the workload cluster @ 01/25/23 01:56:08.167
STEP: Waiting for a node to have an "nvidia.com/gpu" allocatable resource @ 01/25/23 01:56:08.194
[FAILED] in [It] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:80 @ 01/25/23 02:21:08.678
Jan 25 02:21:08.678: INFO: FAILED!
Jan 25 02:21:08.678: INFO: Cleaning up after "Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node" spec
STEP: Dumping logs from the "capz-e2e-plb7mq-gpu" workload cluster @ 01/25/23 02:21:08.678
Jan 25 02:21:08.678: INFO: Dumping workload cluster capz-e2e-plb7mq/capz-e2e-plb7mq-gpu logs
Jan 25 02:21:08.738: INFO: Collecting logs for Linux node capz-e2e-plb7mq-gpu-control-plane-fzzxp in cluster capz-e2e-plb7mq-gpu in namespace capz-e2e-plb7mq
Jan 25 02:21:31.673: INFO: Collecting boot logs for AzureMachine capz-e2e-plb7mq-gpu-control-plane-fzzxp
... skipping 74 lines ...
INFO: Deleting namespace capz-e2e-plb7mq
Jan 25 02:28:30.647: INFO: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs @ 01/25/23 02:28:31.183
INFO: "with a single control plane node and 1 node" started at Wed, 25 Jan 2023 02:29:43 UTC on Ginkgo node 10 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
<< Timeline
[FAILED] Timed out after 1500.001s.
Logs for pod gpu-operator-node-feature-discovery-master-77bc558fdc-zd7rh:
I0125 01:54:02.020848 1 nfd-master.go:170] Node Feature Discovery Master v0.10.1
I0125 01:54:02.021405 1 nfd-master.go:174] NodeName: "capz-e2e-plb7mq-gpu-control-plane-fzzxp"
I0125 01:54:02.021418 1 nfd-master.go:185] starting nfd LabelRule controller
I0125 01:54:02.049822 1 nfd-master.go:226] gRPC server serving on port: 8080
... skipping 267 lines ...
------------------------------
• [3752.726 seconds]
Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:156

Captured StdOut/StdErr Output >>
2023/01/25 01:46:28 failed trying to get namespace (capz-e2e-puwq3y):namespaces "capz-e2e-puwq3y" not found
cluster.cluster.x-k8s.io/capz-e2e-puwq3y-public-custom-vnet created
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-puwq3y-public-custom-vnet created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-puwq3y-public-custom-vnet-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-puwq3y-public-custom-vnet-control-plane created
machinedeployment.cluster.x-k8s.io/capz-e2e-puwq3y-public-custom-vnet-md-0 created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-puwq3y-public-custom-vnet-md-0 created
... skipping 247 lines ...
Jan 25 02:42:28.678: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-7wxp8, container node-driver-registrar
Jan 25 02:42:28.678: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-ff6zq
Jan 25 02:42:28.707: INFO: Fetching kube-system pod logs took 528.109391ms
Jan 25 02:42:28.707: INFO: Dumping workload cluster capz-e2e-puwq3y/capz-e2e-puwq3y-public-custom-vnet Azure activity log
Jan 25 02:42:28.707: INFO: Creating log watcher for controller tigera-operator/tigera-operator-64db64cb98-pw9gs, container tigera-operator
Jan 25 02:42:28.707: INFO: Collecting events for Pod tigera-operator/tigera-operator-64db64cb98-pw9gs
Jan 25 02:42:35.554: INFO: Got error while iterating over activity logs for resource group capz-e2e-puwq3y-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure responding to next results request: StatusCode=404 -- Original Error: autorest/azure: error response cannot be parsed: {"<!DOCTYPE html ... <title>404 - File or directory not found.</title> ..."} error: invalid character '<' looking for beginning of value
Jan 25 02:42:35.554: INFO: Fetching activity logs took 6.847563801s
Jan 25 02:42:35.554: INFO: Dumping all the Cluster API resources in the "capz-e2e-puwq3y" namespace
Jan 25 02:42:35.922: INFO: Deleting all clusters in the capz-e2e-puwq3y namespace
STEP: Deleting cluster capz-e2e-puwq3y-public-custom-vnet @ 01/25/23 02:42:35.945
INFO: Waiting for the Cluster capz-e2e-puwq3y/capz-e2e-puwq3y-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-puwq3y-public-custom-vnet to be deleted @ 01/25/23 02:42:35.969
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-749ff5bffd-ntb87, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-6f7b75f796-ch5ss, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-687b6fd9bc-7n8c7, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-669bd95bbb-vhkls, container manager: http2: client connection lost
Jan 25 02:45:36.068: INFO: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-puwq3y
Jan 25 02:45:36.087: INFO: Running additional cleanup for the "create-workload-cluster" test spec
Jan 25 02:45:36.087: INFO: deleting an existing virtual network "custom-vnet"
Jan 25 02:45:46.877: INFO: deleting an existing route table "node-routetable"
Jan 25 02:45:49.241: INFO: deleting an existing network security group "node-nsg"
... skipping 16 lines ...
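The activity-log failure is the SDK trying to JSON-decode an HTML 404 page: encoding/json stops at the leading "<" of "<!DOCTYPE html", hence "invalid character '<' looking for beginning of value". A defensive decode that surfaces the status and content type before unmarshalling — an illustrative guard, not autorest's internals:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"io"
    	"net/http"
    	"strings"
    )

    // decodeJSON refuses to unmarshal non-JSON bodies, reporting the status
    // and content type instead of json's cryptic "invalid character" error.
    func decodeJSON(resp *http.Response, out any) error {
    	body, err := io.ReadAll(io.LimitReader(resp.Body, 1<<20))
    	if err != nil {
    		return err
    	}
    	ct := resp.Header.Get("Content-Type")
    	if resp.StatusCode >= 400 || !strings.Contains(ct, "application/json") {
    		return fmt.Errorf("status %d with content type %q: %.80s", resp.StatusCode, ct, body)
    	}
    	return json.Unmarshal(body, out)
    }

    func main() {
    	// Simulate the failure mode: an HTML 404 page where JSON was expected.
    	var v map[string]any
    	err := json.Unmarshal([]byte("<!DOCTYPE html ..."), &v)
    	fmt.Println(err) // invalid character '<' looking for beginning of value
    }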
[ReportAfterSuite] PASSED [0.016 seconds]
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
------------------------------

Summarizing 1 Failure:
[FAIL] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] [It] with a single control plane node and 1 node
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:80

Ran 7 of 27 Specs in 3884.592 seconds
FAIL! -- 6 Passed | 1 Failed | 0 Pending | 20 Skipped

You're using deprecated Ginkgo functionality:
=============================================
CurrentGinkgoTestDescription() is deprecated in Ginkgo V2. Use CurrentSpecReport() instead.
Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:429
... skipping 85 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:285
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:429

To silence deprecations that can be silenced set the following environment variable:
ACK_GINKGO_DEPRECATIONS=2.7.0

--- FAIL: TestE2E (2726.24s)
FAIL
Ginkgo ran 1 suite in 1h7m7.771356597s
Test Suite Failed
make[1]: *** [Makefile:654: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:663: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...