PR | jackfrancis: Update default k8s version to v1.25 for testing
Result | FAILURE
Tests | 2 failed / 25 succeeded
Started |
Elapsed | 1h3m
Revision | aa4b89f70338b5bf172b792cbe9a26a0f73595d6
Refs | 3088
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sWorkload\scluster\screation\sCreating\sa\sGPU\-enabled\scluster\s\[OPTIONAL\]\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
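The escaped regex above scopes the run to the single failed GPU spec. The same invocation, reflowed for readability (unchanged apart from the line continuation); the GINKGO_FOCUS form in the trailing comment is an assumption about the repository's Makefile and is not taken from this run:

# Re-run only the failed GPU spec (same command as above, reflowed):
go run hack/e2e.go -v --test \
  --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sWorkload\scluster\screation\sCreating\sa\sGPU\-enabled\scluster\s\[OPTIONAL\]\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
# Assumption (not shown in this log): a plain-text focus via the Makefile,
# e.g. GINKGO_FOCUS="Creating a GPU-enabled cluster" make test-e2e, may also work.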
[FAILED] Timed out after 1500.000s. Logs for pod gpu-operator-node-feature-discovery-master-77bc558fdc-pbptg: I0125 18:43:43.601537 1 nfd-master.go:170] Node Feature Discovery Master v0.10.1 I0125 18:43:43.601610 1 nfd-master.go:174] NodeName: "capz-e2e-7dm7pj-gpu-control-plane-89lrz" I0125 18:43:43.601616 1 nfd-master.go:185] starting nfd LabelRule controller I0125 18:43:43.629745 1 nfd-master.go:226] gRPC server serving on port: 8080 I0125 18:44:32.243729 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-control-plane-89lrz" I0125 18:45:10.819049 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-md-0-xmfh7" I0125 18:45:32.307830 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-control-plane-89lrz" I0125 18:46:10.910243 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-md-0-xmfh7" I0125 18:46:32.341801 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-control-plane-89lrz" I0125 18:47:10.935507 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-md-0-xmfh7" I0125 18:47:32.378791 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-control-plane-89lrz" I0125 18:48:10.961269 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-md-0-xmfh7" I0125 18:48:32.415628 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-control-plane-89lrz" I0125 18:49:10.994796 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-md-0-xmfh7" I0125 18:49:32.444874 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-control-plane-89lrz" I0125 18:50:11.020573 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-md-0-xmfh7" I0125 18:50:32.474692 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-control-plane-89lrz" I0125 18:51:11.048646 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-md-0-xmfh7" I0125 18:51:32.515715 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-control-plane-89lrz" I0125 18:52:11.082233 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-md-0-xmfh7" I0125 18:52:32.542640 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-control-plane-89lrz" I0125 18:53:11.107359 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-md-0-xmfh7" I0125 18:53:32.570066 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-control-plane-89lrz" I0125 18:54:11.138777 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-md-0-xmfh7" I0125 18:54:32.602384 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-control-plane-89lrz" I0125 18:55:11.173527 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-md-0-xmfh7" I0125 18:55:32.624709 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-control-plane-89lrz" I0125 18:56:11.215965 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-md-0-xmfh7" I0125 18:56:32.661727 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-control-plane-89lrz" I0125 18:57:11.252342 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-md-0-xmfh7" I0125 18:57:32.698637 1 nfd-master.go:423] received labeling request for node 
"capz-e2e-7dm7pj-gpu-control-plane-89lrz" I0125 18:58:11.278769 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-md-0-xmfh7" I0125 18:58:32.739572 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-control-plane-89lrz" I0125 18:59:11.306207 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-md-0-xmfh7" I0125 18:59:32.767982 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-control-plane-89lrz" I0125 19:00:11.333050 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-md-0-xmfh7" I0125 19:00:32.794502 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-control-plane-89lrz" I0125 19:01:11.358959 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-md-0-xmfh7" I0125 19:01:32.829342 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-control-plane-89lrz" I0125 19:02:11.388332 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-md-0-xmfh7" I0125 19:02:32.857402 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-control-plane-89lrz" I0125 19:03:11.417961 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-md-0-xmfh7" I0125 19:03:32.881142 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-control-plane-89lrz" I0125 19:04:11.445727 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-md-0-xmfh7" I0125 19:04:32.911399 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-control-plane-89lrz" I0125 19:05:11.473287 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-md-0-xmfh7" I0125 19:05:32.938115 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-control-plane-89lrz" I0125 19:06:11.500694 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-md-0-xmfh7" I0125 19:06:32.970895 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-control-plane-89lrz" I0125 19:07:11.526380 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-md-0-xmfh7" I0125 19:07:32.998628 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-control-plane-89lrz" I0125 19:08:11.551182 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-md-0-xmfh7" I0125 19:08:33.027091 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-control-plane-89lrz" I0125 19:09:11.578081 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-md-0-xmfh7" I0125 19:09:33.053734 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-control-plane-89lrz" I0125 19:10:11.604491 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-md-0-xmfh7" I0125 19:10:33.087624 1 nfd-master.go:423] received labeling request for node "capz-e2e-7dm7pj-gpu-control-plane-89lrz" Logs for pod gpu-operator-node-feature-discovery-worker-dmfl7: I0125 18:44:32.091655 1 nfd-worker.go:155] Node Feature Discovery Worker v0.10.1 I0125 18:44:32.091894 1 nfd-worker.go:156] NodeName: 'capz-e2e-7dm7pj-gpu-control-plane-89lrz' I0125 18:44:32.093969 1 nfd-worker.go:423] configuration file "/etc/kubernetes/node-feature-discovery/nfd-worker.conf" parsed I0125 18:44:32.094209 1 nfd-worker.go:461] worker (re-)configuration successfully completed I0125 18:44:32.096564 1 base.go:126] connecting to 
nfd-master at gpu-operator-node-feature-discovery-master:8080 ... I0125 18:44:32.100058 1 component.go:36] [core]parsed scheme: "" I0125 18:44:32.100129 1 component.go:36] [core]scheme "" not registered, fallback to default scheme I0125 18:44:32.100362 1 component.go:36] [core]ccResolverWrapper: sending update to cc: {[{gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}] <nil> <nil>} I0125 18:44:32.100388 1 component.go:36] [core]ClientConn switching balancer to "pick_first" I0125 18:44:32.100394 1 component.go:36] [core]Channel switches to new LB policy "pick_first" I0125 18:44:32.100506 1 component.go:36] [core]Subchannel Connectivity change to CONNECTING I0125 18:44:32.101457 1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect I0125 18:44:32.150550 1 component.go:36] [core]Channel Connectivity change to CONNECTING I0125 18:44:32.161849 1 component.go:36] [core]Subchannel Connectivity change to READY I0125 18:44:32.162023 1 component.go:36] [core]Channel Connectivity change to READY I0125 18:44:32.194107 1 nfd-worker.go:472] starting feature discovery... I0125 18:44:32.195155 1 nfd-worker.go:484] feature discovery completed I0125 18:44:32.195304 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:45:32.293723 1 nfd-worker.go:472] starting feature discovery... I0125 18:45:32.294912 1 nfd-worker.go:484] feature discovery completed I0125 18:45:32.294934 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:46:32.330486 1 nfd-worker.go:472] starting feature discovery... I0125 18:46:32.330828 1 nfd-worker.go:484] feature discovery completed I0125 18:46:32.330847 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:47:32.364667 1 nfd-worker.go:472] starting feature discovery... I0125 18:47:32.364910 1 nfd-worker.go:484] feature discovery completed I0125 18:47:32.364927 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:48:32.403443 1 nfd-worker.go:472] starting feature discovery... I0125 18:48:32.403835 1 nfd-worker.go:484] feature discovery completed I0125 18:48:32.403854 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:49:32.435429 1 nfd-worker.go:472] starting feature discovery... I0125 18:49:32.435607 1 nfd-worker.go:484] feature discovery completed I0125 18:49:32.435647 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:50:32.460558 1 nfd-worker.go:472] starting feature discovery... I0125 18:50:32.460743 1 nfd-worker.go:484] feature discovery completed I0125 18:50:32.460833 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:51:32.504877 1 nfd-worker.go:472] starting feature discovery... I0125 18:51:32.505110 1 nfd-worker.go:484] feature discovery completed I0125 18:51:32.505135 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:52:32.530675 1 nfd-worker.go:472] starting feature discovery... I0125 18:52:32.530879 1 nfd-worker.go:484] feature discovery completed I0125 18:52:32.530897 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:53:32.558805 1 nfd-worker.go:472] starting feature discovery... I0125 18:53:32.559169 1 nfd-worker.go:484] feature discovery completed I0125 18:53:32.559188 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:54:32.588328 1 nfd-worker.go:472] starting feature discovery... 
I0125 18:54:32.588668 1 nfd-worker.go:484] feature discovery completed I0125 18:54:32.588686 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:55:32.615763 1 nfd-worker.go:472] starting feature discovery... I0125 18:55:32.616033 1 nfd-worker.go:484] feature discovery completed I0125 18:55:32.616050 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:56:32.648709 1 nfd-worker.go:472] starting feature discovery... I0125 18:56:32.649092 1 nfd-worker.go:484] feature discovery completed I0125 18:56:32.649198 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:57:32.687160 1 nfd-worker.go:472] starting feature discovery... I0125 18:57:32.687321 1 nfd-worker.go:484] feature discovery completed I0125 18:57:32.687339 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:58:32.727079 1 nfd-worker.go:472] starting feature discovery... I0125 18:58:32.727351 1 nfd-worker.go:484] feature discovery completed I0125 18:58:32.727369 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:59:32.756691 1 nfd-worker.go:472] starting feature discovery... I0125 18:59:32.756993 1 nfd-worker.go:484] feature discovery completed I0125 18:59:32.757010 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 19:00:32.781914 1 nfd-worker.go:472] starting feature discovery... I0125 19:00:32.782340 1 nfd-worker.go:484] feature discovery completed I0125 19:00:32.782358 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 19:01:32.817440 1 nfd-worker.go:472] starting feature discovery... I0125 19:01:32.817714 1 nfd-worker.go:484] feature discovery completed I0125 19:01:32.817733 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 19:02:32.844879 1 nfd-worker.go:472] starting feature discovery... I0125 19:02:32.845273 1 nfd-worker.go:484] feature discovery completed I0125 19:02:32.845291 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 19:03:32.870224 1 nfd-worker.go:472] starting feature discovery... I0125 19:03:32.870542 1 nfd-worker.go:484] feature discovery completed I0125 19:03:32.870560 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 19:04:32.899315 1 nfd-worker.go:472] starting feature discovery... I0125 19:04:32.899624 1 nfd-worker.go:484] feature discovery completed I0125 19:04:32.899641 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 19:05:32.925766 1 nfd-worker.go:472] starting feature discovery... I0125 19:05:32.926101 1 nfd-worker.go:484] feature discovery completed I0125 19:05:32.926118 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 19:06:32.956827 1 nfd-worker.go:472] starting feature discovery... I0125 19:06:32.957164 1 nfd-worker.go:484] feature discovery completed I0125 19:06:32.957204 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 19:07:32.986407 1 nfd-worker.go:472] starting feature discovery... I0125 19:07:32.986669 1 nfd-worker.go:484] feature discovery completed I0125 19:07:32.986731 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 19:08:33.014442 1 nfd-worker.go:472] starting feature discovery... I0125 19:08:33.014683 1 nfd-worker.go:484] feature discovery completed I0125 19:08:33.014701 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 19:09:33.042697 1 nfd-worker.go:472] starting feature discovery... 
I0125 19:09:33.042984 1 nfd-worker.go:484] feature discovery completed I0125 19:09:33.043003 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 19:10:33.071082 1 nfd-worker.go:472] starting feature discovery... I0125 19:10:33.071284 1 nfd-worker.go:484] feature discovery completed I0125 19:10:33.071306 1 nfd-worker.go:565] sending labeling request to nfd-master Logs for pod gpu-operator-node-feature-discovery-worker-q54vc: I0125 18:45:10.781355 1 nfd-worker.go:155] Node Feature Discovery Worker v0.10.1 I0125 18:45:10.781425 1 nfd-worker.go:156] NodeName: 'capz-e2e-7dm7pj-gpu-md-0-xmfh7' I0125 18:45:10.781918 1 nfd-worker.go:423] configuration file "/etc/kubernetes/node-feature-discovery/nfd-worker.conf" parsed I0125 18:45:10.781992 1 nfd-worker.go:461] worker (re-)configuration successfully completed I0125 18:45:10.782035 1 base.go:126] connecting to nfd-master at gpu-operator-node-feature-discovery-master:8080 ... I0125 18:45:10.782075 1 component.go:36] [core]parsed scheme: "" I0125 18:45:10.782082 1 component.go:36] [core]scheme "" not registered, fallback to default scheme I0125 18:45:10.782097 1 component.go:36] [core]ccResolverWrapper: sending update to cc: {[{gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}] <nil> <nil>} I0125 18:45:10.782109 1 component.go:36] [core]ClientConn switching balancer to "pick_first" I0125 18:45:10.782113 1 component.go:36] [core]Channel switches to new LB policy "pick_first" I0125 18:45:10.782145 1 component.go:36] [core]Subchannel Connectivity change to CONNECTING I0125 18:45:10.782172 1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect I0125 18:45:10.782222 1 component.go:36] [core]Channel Connectivity change to CONNECTING I0125 18:45:10.788387 1 component.go:36] [core]Subchannel Connectivity change to READY I0125 18:45:10.788403 1 component.go:36] [core]Channel Connectivity change to READY I0125 18:45:10.797089 1 nfd-worker.go:472] starting feature discovery... I0125 18:45:10.797199 1 nfd-worker.go:484] feature discovery completed I0125 18:45:10.797210 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:46:10.896844 1 nfd-worker.go:472] starting feature discovery... I0125 18:46:10.896956 1 nfd-worker.go:484] feature discovery completed I0125 18:46:10.896969 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:47:10.926423 1 nfd-worker.go:472] starting feature discovery... I0125 18:47:10.926535 1 nfd-worker.go:484] feature discovery completed I0125 18:47:10.926547 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:48:10.951031 1 nfd-worker.go:472] starting feature discovery... I0125 18:48:10.951186 1 nfd-worker.go:484] feature discovery completed I0125 18:48:10.951200 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:49:10.981066 1 nfd-worker.go:472] starting feature discovery... I0125 18:49:10.981178 1 nfd-worker.go:484] feature discovery completed I0125 18:49:10.981191 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:50:11.010870 1 nfd-worker.go:472] starting feature discovery... I0125 18:50:11.010984 1 nfd-worker.go:484] feature discovery completed I0125 18:50:11.010997 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:51:11.038035 1 nfd-worker.go:472] starting feature discovery... 
I0125 18:51:11.038147 1 nfd-worker.go:484] feature discovery completed I0125 18:51:11.038159 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:52:11.071678 1 nfd-worker.go:472] starting feature discovery... I0125 18:52:11.071793 1 nfd-worker.go:484] feature discovery completed I0125 18:52:11.071806 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:53:11.097633 1 nfd-worker.go:472] starting feature discovery... I0125 18:53:11.097783 1 nfd-worker.go:484] feature discovery completed I0125 18:53:11.097799 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:54:11.128161 1 nfd-worker.go:472] starting feature discovery... I0125 18:54:11.128274 1 nfd-worker.go:484] feature discovery completed I0125 18:54:11.128286 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:55:11.164265 1 nfd-worker.go:472] starting feature discovery... I0125 18:55:11.164379 1 nfd-worker.go:484] feature discovery completed I0125 18:55:11.164393 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:56:11.206100 1 nfd-worker.go:472] starting feature discovery... I0125 18:56:11.206213 1 nfd-worker.go:484] feature discovery completed I0125 18:56:11.206226 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:57:11.241934 1 nfd-worker.go:472] starting feature discovery... I0125 18:57:11.242050 1 nfd-worker.go:484] feature discovery completed I0125 18:57:11.242063 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:58:11.269255 1 nfd-worker.go:472] starting feature discovery... I0125 18:58:11.269369 1 nfd-worker.go:484] feature discovery completed I0125 18:58:11.269382 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 18:59:11.296808 1 nfd-worker.go:472] starting feature discovery... I0125 18:59:11.296934 1 nfd-worker.go:484] feature discovery completed I0125 18:59:11.296947 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 19:00:11.322626 1 nfd-worker.go:472] starting feature discovery... I0125 19:00:11.322771 1 nfd-worker.go:484] feature discovery completed I0125 19:00:11.322785 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 19:01:11.349300 1 nfd-worker.go:472] starting feature discovery... I0125 19:01:11.349411 1 nfd-worker.go:484] feature discovery completed I0125 19:01:11.349424 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 19:02:11.378075 1 nfd-worker.go:472] starting feature discovery... I0125 19:02:11.378190 1 nfd-worker.go:484] feature discovery completed I0125 19:02:11.378203 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 19:03:11.407624 1 nfd-worker.go:472] starting feature discovery... I0125 19:03:11.407733 1 nfd-worker.go:484] feature discovery completed I0125 19:03:11.407746 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 19:04:11.435600 1 nfd-worker.go:472] starting feature discovery... I0125 19:04:11.435714 1 nfd-worker.go:484] feature discovery completed I0125 19:04:11.435726 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 19:05:11.463069 1 nfd-worker.go:472] starting feature discovery... I0125 19:05:11.463185 1 nfd-worker.go:484] feature discovery completed I0125 19:05:11.463199 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 19:06:11.491086 1 nfd-worker.go:472] starting feature discovery... 
I0125 19:06:11.491198 1 nfd-worker.go:484] feature discovery completed I0125 19:06:11.491210 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 19:07:11.515751 1 nfd-worker.go:472] starting feature discovery... I0125 19:07:11.515866 1 nfd-worker.go:484] feature discovery completed I0125 19:07:11.515878 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 19:08:11.541236 1 nfd-worker.go:472] starting feature discovery... I0125 19:08:11.541348 1 nfd-worker.go:484] feature discovery completed I0125 19:08:11.541360 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 19:09:11.568311 1 nfd-worker.go:472] starting feature discovery... I0125 19:09:11.568421 1 nfd-worker.go:484] feature discovery completed I0125 19:09:11.568434 1 nfd-worker.go:565] sending labeling request to nfd-master I0125 19:10:11.594320 1 nfd-worker.go:472] starting feature discovery... I0125 19:10:11.594430 1 nfd-worker.go:484] feature discovery completed I0125 19:10:11.594443 1 nfd-worker.go:565] sending labeling request to nfd-master Expected <bool>: false to be true In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:80 @ 01/25/23 19:10:40.209from junit.e2e_suite.1.xml
2023/01/25 18:38:12 failed trying to get namespace (capz-e2e-7dm7pj):namespaces "capz-e2e-7dm7pj" not found cluster.cluster.x-k8s.io/capz-e2e-7dm7pj-gpu serverside-applied azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-7dm7pj-gpu serverside-applied kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-7dm7pj-gpu-control-plane serverside-applied azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-7dm7pj-gpu-control-plane serverside-applied azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp serverside-applied machinedeployment.cluster.x-k8s.io/capz-e2e-7dm7pj-gpu-md-0 serverside-applied azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-7dm7pj-gpu-md-0 serverside-applied kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-7dm7pj-gpu-md-0 serverside-applied clusterresourceset.addons.cluster.x-k8s.io/crs-gpu-operator serverside-applied configmap/nvidia-clusterpolicy-crd serverside-applied configmap/nvidia-gpu-operator-components serverside-applied felixconfiguration.crd.projectcalico.org/default configured > Enter [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:56 @ 01/25/23 18:38:12.415 INFO: "" started at Wed, 25 Jan 2023 18:38:12 UTC on Ginkgo node 6 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml STEP: Creating namespace "capz-e2e-7dm7pj" for hosting the cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 18:38:12.416 Jan 25 18:38:12.416: INFO: starting to create namespace for hosting the "capz-e2e-7dm7pj" test spec INFO: Creating namespace capz-e2e-7dm7pj INFO: Creating event watcher for namespace "capz-e2e-7dm7pj" Jan 25 18:38:12.566: INFO: Creating cluster identity secret "cluster-identity-secret" < Exit [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:56 @ 01/25/23 18:38:12.636 (220ms) > Enter [It] with a single control plane node and 1 node - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:506 @ 01/25/23 18:38:12.636 INFO: Cluster name is capz-e2e-7dm7pj-gpu INFO: Creating the workload cluster with name "capz-e2e-7dm7pj-gpu" using the "nvidia-gpu" template (Kubernetes v1.25.6, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-7dm7pj-gpu --infrastructure (default) --kubernetes-version v1.25.6 --control-plane-machine-count 1 --worker-machine-count 1 --flavor nvidia-gpu INFO: Applying the cluster template yaml to the cluster INFO: Waiting for the cluster infrastructure to be provisioned STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:134 @ 01/25/23 18:38:17.559 INFO: Waiting for control plane to be initialized STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:51 @ 01/25/23 18:40:17.698 STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:112 @ 01/25/23 18:40:17.698 Jan 25 18:42:18.081: INFO: getting history for release projectcalico Jan 25 18:42:18.185: INFO: Release projectcalico does not exist, installing it Jan 25 18:42:19.389: INFO: creating 1 resource(s) Jan 25 18:42:19.526: INFO: creating 1 resource(s) Jan 25 18:42:19.651: INFO: creating 1 resource(s) 
Jan 25 18:42:19.769: INFO: creating 1 resource(s) Jan 25 18:42:19.908: INFO: creating 1 resource(s) Jan 25 18:42:20.027: INFO: creating 1 resource(s) Jan 25 18:42:20.337: INFO: creating 1 resource(s) Jan 25 18:42:20.539: INFO: creating 1 resource(s) Jan 25 18:42:20.660: INFO: creating 1 resource(s) Jan 25 18:42:20.787: INFO: creating 1 resource(s) Jan 25 18:42:20.931: INFO: creating 1 resource(s) Jan 25 18:42:21.061: INFO: creating 1 resource(s) Jan 25 18:42:21.194: INFO: creating 1 resource(s) Jan 25 18:42:21.319: INFO: creating 1 resource(s) Jan 25 18:42:21.443: INFO: creating 1 resource(s) Jan 25 18:42:21.579: INFO: creating 1 resource(s) Jan 25 18:42:21.770: INFO: creating 1 resource(s) Jan 25 18:42:21.913: INFO: creating 1 resource(s) Jan 25 18:42:22.110: INFO: creating 1 resource(s) Jan 25 18:42:22.303: INFO: creating 1 resource(s) Jan 25 18:42:22.905: INFO: creating 1 resource(s) Jan 25 18:42:23.054: INFO: Clearing discovery cache Jan 25 18:42:23.054: INFO: beginning wait for 21 resources with timeout of 1m0s Jan 25 18:42:28.678: INFO: creating 1 resource(s) Jan 25 18:42:29.453: INFO: creating 6 resource(s) Jan 25 18:42:30.713: INFO: Install complete STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:60 @ 01/25/23 18:42:31.531 STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 18:42:31.949 Jan 25 18:42:31.949: INFO: starting to wait for deployment to become available Jan 25 18:42:42.156: INFO: Deployment tigera-operator/tigera-operator is now available, took 10.207092334s STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:74 @ 01/25/23 18:42:43.409 STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 18:42:43.925 Jan 25 18:42:43.925: INFO: starting to wait for deployment to become available Jan 25 18:43:56.310: INFO: Deployment calico-system/calico-kube-controllers is now available, took 1m12.384967619s STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 18:43:57.247 Jan 25 18:43:57.247: INFO: starting to wait for deployment to become available Jan 25 18:43:57.350: INFO: Deployment calico-system/calico-typha is now available, took 103.324026ms STEP: Waiting for Ready calico-apiserver deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:79 @ 01/25/23 18:43:57.35 STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 18:43:58.103 Jan 25 18:43:58.103: INFO: starting to wait for deployment to become available Jan 25 18:44:48.758: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 50.655330151s INFO: Waiting for the first control plane machine managed by capz-e2e-7dm7pj/capz-e2e-7dm7pj-gpu-control-plane to be provisioned STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:133 @ 01/25/23 18:44:48.799 STEP: Installing azure-disk CSI driver components via helm - 
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:65 @ 01/25/23 18:44:48.81 Jan 25 18:44:48.950: INFO: getting history for release azuredisk-csi-driver-oot Jan 25 18:44:49.063: INFO: Release azuredisk-csi-driver-oot does not exist, installing it Jan 25 18:44:53.917: INFO: creating 1 resource(s) Jan 25 18:44:54.377: INFO: creating 18 resource(s) Jan 25 18:44:55.248: INFO: Install complete STEP: Waiting for Ready csi-azuredisk-controller deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:75 @ 01/25/23 18:44:55.301 STEP: waiting for deployment kube-system/csi-azuredisk-controller to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 18:44:55.73 Jan 25 18:44:55.730: INFO: starting to wait for deployment to become available Jan 25 18:45:36.366: INFO: Deployment kube-system/csi-azuredisk-controller is now available, took 40.636269352s INFO: Waiting for control plane to be ready INFO: Waiting for control plane capz-e2e-7dm7pj/capz-e2e-7dm7pj-gpu-control-plane to be ready (implies underlying nodes to be ready as well) STEP: Waiting for the control plane to be ready - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:165 @ 01/25/23 18:45:36.4 STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:196 @ 01/25/23 18:45:36.412 INFO: Waiting for the machine deployments to be provisioned STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:102 @ 01/25/23 18:45:36.476 STEP: Checking all the machines controlled by capz-e2e-7dm7pj-gpu-md-0 are in the "<None>" failure domain - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/25/23 18:45:36.5 INFO: Waiting for the machine pools to be provisioned INFO: Calling PostMachinesProvisioned STEP: Waiting for all DaemonSet Pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/daemonsets.go:71 @ 01/25/23 18:45:36.71 STEP: waiting for 2 daemonset calico-system/calico-node pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 18:45:37.341 Jan 25 18:45:37.341: INFO: 2 daemonset calico-system/calico-node pods are running, took 104.562822ms STEP: waiting for 2 daemonset calico-system/csi-node-driver pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 18:45:37.447 Jan 25 18:45:37.447: INFO: 2 daemonset calico-system/csi-node-driver pods are running, took 104.958731ms STEP: waiting for 2 daemonset gpu-operator-resources/gpu-operator-node-feature-discovery-worker pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 18:45:37.557 Jan 25 18:45:37.557: INFO: 2 daemonset gpu-operator-resources/gpu-operator-node-feature-discovery-worker pods are running, took 105.033791ms STEP: waiting for 2 daemonset kube-system/csi-azuredisk-node pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 18:45:37.696 Jan 25 18:45:37.696: INFO: 2 daemonset kube-system/csi-azuredisk-node pods are running, took 135.767136ms STEP: daemonset kube-system/csi-azuredisk-node-win has no schedulable 
nodes, will skip - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 18:45:37.802 STEP: waiting for 2 daemonset kube-system/kube-proxy pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 18:45:37.908 Jan 25 18:45:37.908: INFO: 2 daemonset kube-system/kube-proxy pods are running, took 103.698073ms STEP: Verifying expected VM extensions are present on the node - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:544 @ 01/25/23 18:45:37.908 STEP: creating a Kubernetes client to the workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_vmextensions.go:62 @ 01/25/23 18:45:37.908 STEP: Retrieving all machines from the machine template spec - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_vmextensions.go:68 @ 01/25/23 18:45:37.984 Jan 25 18:45:37.984: INFO: Listing machines in namespace capz-e2e-7dm7pj with label cluster.x-k8s.io/cluster-name=capz-e2e-7dm7pj-gpu STEP: Creating a mapping of machine IDs to array of expected VM extensions - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_vmextensions.go:83 @ 01/25/23 18:45:37.995 STEP: Creating a VM and VM extension client - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_vmextensions.go:91 @ 01/25/23 18:45:37.995 STEP: Verifying specified VM extensions are created on Azure - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_vmextensions.go:108 @ 01/25/23 18:45:38.857 STEP: Retrieving all machine pools from the machine template spec - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_vmextensions.go:123 @ 01/25/23 18:45:39.688 Jan 25 18:45:39.688: INFO: Listing machine pools in namespace capz-e2e-7dm7pj with label cluster.x-k8s.io/cluster-name=capz-e2e-7dm7pj-gpu END STEP: Verifying expected VM extensions are present on the node - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:544 @ 01/25/23 18:45:39.697 (1.789s) STEP: Running a GPU-based calculation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:554 @ 01/25/23 18:45:39.697 STEP: creating a Kubernetes client to the workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:62 @ 01/25/23 18:45:39.697 STEP: Waiting for a node to have an "nvidia.com/gpu" allocatable resource - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:68 @ 01/25/23 18:45:39.738 END STEP: Running a GPU-based calculation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:554 @ 01/25/23 19:10:40.209 (25m0.512s) [FAILED] Timed out after 1500.000s. 
Logs for pods gpu-operator-node-feature-discovery-master-77bc558fdc-pbptg, gpu-operator-node-feature-discovery-worker-dmfl7, and gpu-operator-node-feature-discovery-worker-q54vc are omitted here; the build log repeats, verbatim, the pod logs already shown in the junit failure message above.
Expected <bool>: false to be true In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:80 @ 01/25/23 19:10:40.209 < Exit [It] with a single control plane node and 1 node - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:506 @ 01/25/23 19:10:40.209 (32m27.574s) > Enter [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:117 @ 01/25/23 19:10:40.209 Jan 25 19:10:40.209: INFO: FAILED! Jan 25 19:10:40.209: INFO: Cleaning up after "Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node" spec STEP: Dumping logs from the "capz-e2e-7dm7pj-gpu" workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 19:10:40.209 Jan 25 19:10:40.209: INFO: Dumping workload cluster capz-e2e-7dm7pj/capz-e2e-7dm7pj-gpu logs Jan 25 19:10:40.261: INFO: Collecting logs for Linux node capz-e2e-7dm7pj-gpu-control-plane-89lrz in cluster capz-e2e-7dm7pj-gpu in namespace capz-e2e-7dm7pj Jan 25 19:11:01.566: INFO: Collecting boot logs for AzureMachine capz-e2e-7dm7pj-gpu-control-plane-89lrz Jan 25 19:11:03.432: INFO: Collecting logs for Linux node capz-e2e-7dm7pj-gpu-md-0-xmfh7 in cluster capz-e2e-7dm7pj-gpu in namespace capz-e2e-7dm7pj Jan 25 19:11:13.610: INFO: Collecting boot logs for AzureMachine capz-e2e-7dm7pj-gpu-md-0-xmfh7 Jan 25 19:11:14.733: INFO: Dumping workload cluster capz-e2e-7dm7pj/capz-e2e-7dm7pj-gpu kube-system pod logs Jan 25 19:11:15.481: INFO: Collecting events for Pod calico-apiserver/calico-apiserver-6f6b4965c-24xs4 Jan 25 19:11:15.481: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-6f6b4965c-wjwbm, container calico-apiserver Jan 25 19:11:15.481: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-6f6b4965c-24xs4, container calico-apiserver Jan 25 19:11:15.481: INFO: Collecting events for Pod calico-apiserver/calico-apiserver-6f6b4965c-wjwbm Jan 25 19:11:15.600: INFO: Collecting events for Pod calico-system/calico-kube-controllers-5f9dc85578-5cdr5 Jan 25 19:11:15.600: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-5f9dc85578-5cdr5, container calico-kube-controllers Jan 25 19:11:15.601: INFO: Collecting events for Pod calico-system/calico-typha-7c5554d4b4-vtx9l Jan 25 19:11:15.601: INFO: Creating log watcher for controller
calico-system/calico-node-dmgzl, container calico-node Jan 25 19:11:15.602: INFO: Creating log watcher for controller calico-system/csi-node-driver-cp8fw, container calico-csi Jan 25 19:11:15.602: INFO: Collecting events for Pod calico-system/calico-node-dmgzl Jan 25 19:11:15.602: INFO: Creating log watcher for controller calico-system/calico-node-rgtbj, container calico-node Jan 25 19:11:15.603: INFO: Collecting events for Pod calico-system/calico-node-rgtbj Jan 25 19:11:15.603: INFO: Creating log watcher for controller calico-system/csi-node-driver-cp8fw, container csi-node-driver-registrar Jan 25 19:11:15.603: INFO: Creating log watcher for controller calico-system/calico-typha-7c5554d4b4-vtx9l, container calico-typha Jan 25 19:11:15.603: INFO: Creating log watcher for controller calico-system/csi-node-driver-h9j65, container calico-csi Jan 25 19:11:15.603: INFO: Collecting events for Pod calico-system/csi-node-driver-cp8fw Jan 25 19:11:15.603: INFO: Creating log watcher for controller calico-system/csi-node-driver-h9j65, container csi-node-driver-registrar Jan 25 19:11:15.604: INFO: Collecting events for Pod calico-system/csi-node-driver-h9j65 Jan 25 19:11:15.711: INFO: Collecting events for Pod gpu-operator-resources/gpu-operator-node-feature-discovery-master-77bc558fdc-pbptg Jan 25 19:11:15.711: INFO: Collecting events for Pod gpu-operator-resources/gpu-operator-node-feature-discovery-worker-dmfl7 Jan 25 19:11:15.712: INFO: Creating log watcher for controller gpu-operator-resources/gpu-operator-node-feature-discovery-worker-q54vc, container worker Jan 25 19:11:15.712: INFO: Creating log watcher for controller gpu-operator-resources/gpu-operator-node-feature-discovery-worker-dmfl7, container worker Jan 25 19:11:15.712: INFO: Collecting events for Pod gpu-operator-resources/gpu-operator-bcf6cd75d-zzf8h Jan 25 19:11:15.712: INFO: Creating log watcher for controller gpu-operator-resources/gpu-operator-node-feature-discovery-master-77bc558fdc-pbptg, container master Jan 25 19:11:15.712: INFO: Creating log watcher for controller gpu-operator-resources/gpu-operator-bcf6cd75d-zzf8h, container gpu-operator Jan 25 19:11:15.712: INFO: Collecting events for Pod gpu-operator-resources/gpu-operator-node-feature-discovery-worker-q54vc Jan 25 19:11:15.848: INFO: Collecting events for Pod kube-system/coredns-565d847f94-mkmzc Jan 25 19:11:15.848: INFO: Creating log watcher for controller kube-system/coredns-565d847f94-xjfhd, container coredns Jan 25 19:11:15.848: INFO: Creating log watcher for controller kube-system/coredns-565d847f94-mkmzc, container coredns Jan 25 19:11:15.849: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-897w8 Jan 25 19:11:15.849: INFO: Collecting events for Pod kube-system/kube-apiserver-capz-e2e-7dm7pj-gpu-control-plane-89lrz Jan 25 19:11:15.849: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbd9768d6-2nj7d, container liveness-probe Jan 25 19:11:15.849: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-kbljz, container liveness-probe Jan 25 19:11:15.849: INFO: Collecting events for Pod kube-system/kube-proxy-lwxdw Jan 25 19:11:15.849: INFO: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-7dm7pj-gpu-control-plane-89lrz, container kube-controller-manager Jan 25 19:11:15.850: INFO: Creating log watcher for controller kube-system/kube-proxy-vc2nb, container kube-proxy Jan 25 19:11:15.851: INFO: Collecting events for Pod kube-system/coredns-565d847f94-xjfhd Jan 25 19:11:15.851: 
INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-kbljz, container node-driver-registrar Jan 25 19:11:15.851: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbd9768d6-2nj7d, container csi-provisioner Jan 25 19:11:15.851: INFO: Creating log watcher for controller kube-system/etcd-capz-e2e-7dm7pj-gpu-control-plane-89lrz, container etcd Jan 25 19:11:15.851: INFO: Collecting events for Pod kube-system/kube-controller-manager-capz-e2e-7dm7pj-gpu-control-plane-89lrz Jan 25 19:11:15.851: INFO: Creating log watcher for controller kube-system/kube-proxy-lwxdw, container kube-proxy Jan 25 19:11:15.852: INFO: Collecting events for Pod kube-system/kube-proxy-vc2nb Jan 25 19:11:15.852: INFO: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-7dm7pj-gpu-control-plane-89lrz, container kube-scheduler Jan 25 19:11:15.852: INFO: Collecting events for Pod kube-system/etcd-capz-e2e-7dm7pj-gpu-control-plane-89lrz Jan 25 19:11:15.853: INFO: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-7dm7pj-gpu-control-plane-89lrz, container kube-apiserver Jan 25 19:11:15.853: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-kbljz, container azuredisk Jan 25 19:11:15.853: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbd9768d6-2nj7d, container csi-snapshotter Jan 25 19:11:15.853: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbd9768d6-2nj7d, container azuredisk Jan 25 19:11:15.853: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-kbljz Jan 25 19:11:15.853: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbd9768d6-2nj7d, container csi-attacher Jan 25 19:11:15.853: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbd9768d6-2nj7d, container csi-resizer Jan 25 19:11:15.854: INFO: Collecting events for Pod kube-system/csi-azuredisk-controller-6dbd9768d6-2nj7d Jan 25 19:11:15.854: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-897w8, container node-driver-registrar Jan 25 19:11:15.854: INFO: Collecting events for Pod kube-system/kube-scheduler-capz-e2e-7dm7pj-gpu-control-plane-89lrz Jan 25 19:11:15.854: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-897w8, container azuredisk Jan 25 19:11:15.854: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-897w8, container liveness-probe Jan 25 19:11:15.958: INFO: Fetching kube-system pod logs took 1.224976393s Jan 25 19:11:15.958: INFO: Dumping workload cluster capz-e2e-7dm7pj/capz-e2e-7dm7pj-gpu Azure activity log Jan 25 19:11:15.958: INFO: Creating log watcher for controller tigera-operator/tigera-operator-64db64cb98-vhhgk, container tigera-operator Jan 25 19:11:15.958: INFO: Collecting events for Pod tigera-operator/tigera-operator-64db64cb98-vhhgk Jan 25 19:11:20.720: INFO: Fetching activity logs took 4.761940023s Jan 25 19:11:20.720: INFO: Dumping all the Cluster API resources in the "capz-e2e-7dm7pj" namespace Jan 25 19:11:21.157: INFO: Deleting all clusters in the capz-e2e-7dm7pj namespace STEP: Deleting cluster capz-e2e-7dm7pj-gpu - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/25/23 19:11:21.186 INFO: Waiting for the Cluster capz-e2e-7dm7pj/capz-e2e-7dm7pj-gpu to be deleted STEP: Waiting for cluster capz-e2e-7dm7pj-gpu to be deleted - 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/25/23 19:11:21.197
Jan 25 19:16:11.379: INFO: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-7dm7pj
Jan 25 19:16:11.410: INFO: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:216 @ 01/25/23 19:16:12.034
INFO: "with a single control plane node and 1 node" started at Wed, 25 Jan 2023 19:18:40 UTC on Ginkgo node 6 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
< Exit [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:117 @ 01/25/23 19:18:40.787 (8m0.577s)
Find gpu-operator-node-feature-discovery-master-77bc558fdc-pbptg mentions in log files | View test history on testgrid
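For context on the failure signature above: the GPU spec times out at test/e2e/azure_gpu.go:80 on a boolean condition that never becomes true, while the nfd-master and nfd-worker logs show node-feature-discovery looping normally the whole time. A "Timed out ... Expected <bool>: false to be true" message is the standard Gomega output for an Eventually(...).Should(BeTrue()) poll that never succeeds. The sketch below only illustrates that polling pattern; the function and constant names (jobReady, gpuJobTimeout, gpuJobPollInterval) are hypothetical and are not taken from azure_gpu.go.

```go
// Illustrative sketch only: a Gomega Eventually poll that reports
// "Expected <bool>: false to be true" when the condition never holds
// before the timeout. All names below are hypothetical.
package e2esketch

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	batchv1 "k8s.io/api/batch/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

const (
	gpuJobTimeout      = 25 * time.Minute // hypothetical; the real timeout is set by the suite's config
	gpuJobPollInterval = 30 * time.Second
)

// jobReady reports whether a GPU validation Job has at least one
// successful completion.
func jobReady(ctx context.Context, cs kubernetes.Interface, ns, name string) bool {
	job, err := cs.BatchV1().Jobs(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false
	}
	return job.Status.Succeeded > 0
}

// waitForGPUJob blocks until the Job succeeds; if it never does, Gomega
// fails the spec with the boolean-assertion timeout seen above.
func waitForGPUJob(ctx context.Context, cs kubernetes.Interface, job *batchv1.Job) {
	Eventually(func() bool {
		return jobReady(ctx, cs, job.Namespace, job.Name)
	}, gpuJobTimeout, gpuJobPollInterval).Should(BeTrue())
}
```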
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sWorkload\scluster\screation\sCreating\sclusters\susing\sclusterclass\s\[OPTIONAL\]\swith\sa\ssingle\scontrol\splane\snode\,\sone\slinux\sworker\snode\,\sand\sone\swindows\sworker\snode$'
[FAILED] Timed out after 1500.001s.
Timed out waiting for 1 nodes to be created for MachineDeployment capz-e2e-y7t1gk/capz-e2e-y7t1gk-cc-md-win-fhcw7
Expected <int>: 0 to equal <int>: 1
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:131 @ 01/25/23 19:09:37.124
from junit.e2e_suite.1.xml
2023/01/25 18:38:12 failed trying to get namespace (capz-e2e-y7t1gk):namespaces "capz-e2e-y7t1gk" not found clusterclass.cluster.x-k8s.io/ci-default created kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/ci-default-kubeadm-control-plane created azureclustertemplate.infrastructure.cluster.x-k8s.io/ci-default-azure-cluster created azuremachinetemplate.infrastructure.cluster.x-k8s.io/ci-default-control-plane created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/ci-default-worker created azuremachinetemplate.infrastructure.cluster.x-k8s.io/ci-default-worker created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/ci-default-worker-win created azuremachinetemplate.infrastructure.cluster.x-k8s.io/ci-default-worker-win created azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created cluster.cluster.x-k8s.io/capz-e2e-y7t1gk-cc created clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-y7t1gk-cc-calico created clusterresourceset.addons.cluster.x-k8s.io/csi-proxy created configmap/cni-capz-e2e-y7t1gk-cc-calico-windows created configmap/csi-proxy-addon created felixconfiguration.crd.projectcalico.org/default configured Failed to get logs for Machine capz-e2e-y7t1gk-cc-dxqmc-8gnmb, Cluster capz-e2e-y7t1gk/capz-e2e-y7t1gk-cc: dialing public load balancer at capz-e2e-y7t1gk-cc-70df49c1.westus3.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain Failed to get logs for Machine capz-e2e-y7t1gk-cc-md-0-nxnlq-5f7cbb64c8-xbxq9, Cluster capz-e2e-y7t1gk/capz-e2e-y7t1gk-cc: dialing public load balancer at capz-e2e-y7t1gk-cc-70df49c1.westus3.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain Failed to get logs for Machine capz-e2e-y7t1gk-cc-md-win-fhcw7-7565476557-p4ml4, Cluster capz-e2e-y7t1gk/capz-e2e-y7t1gk-cc: [dialing public load balancer at capz-e2e-y7t1gk-cc-70df49c1.westus3.cloudapp.azure.com: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain, Unable to collect VM Boot Diagnostic logs: AzureMachine provider ID is nil] > Enter [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:56 @ 01/25/23 18:38:12.447 INFO: "" started at Wed, 25 Jan 2023 18:38:12 UTC on Ginkgo node 3 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml STEP: Creating namespace "capz-e2e-y7t1gk" for hosting the cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 18:38:12.447 Jan 25 18:38:12.447: INFO: starting to create namespace for hosting the "capz-e2e-y7t1gk" test spec INFO: Creating namespace capz-e2e-y7t1gk INFO: Creating event watcher for namespace "capz-e2e-y7t1gk" Jan 25 18:38:12.605: INFO: Using existing cluster identity secret < Exit [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:56 @ 01/25/23 18:38:12.605 (159ms) > Enter [It] with a single control plane node, one linux worker node, and one windows worker node - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:906 @ 01/25/23 18:38:12.605 INFO: Cluster name is capz-e2e-y7t1gk-cc INFO: Creating the workload cluster with name "capz-e2e-y7t1gk-cc" using the "topology" template (Kubernetes v1.25.6, 1 control-plane machines, 1 worker machines) INFO: Getting the 
cluster template yaml INFO: clusterctl config cluster capz-e2e-y7t1gk-cc --infrastructure (default) --kubernetes-version v1.25.6 --control-plane-machine-count 1 --worker-machine-count 1 --flavor topology INFO: Applying the cluster template yaml to the cluster INFO: Waiting for the cluster infrastructure to be provisioned STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:134 @ 01/25/23 18:38:19.27 INFO: Waiting for control plane to be initialized STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:51 @ 01/25/23 18:40:19.443 STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:112 @ 01/25/23 18:40:19.443 Jan 25 18:42:19.695: INFO: getting history for release projectcalico Jan 25 18:42:19.759: INFO: Release projectcalico does not exist, installing it Jan 25 18:42:20.668: INFO: creating 1 resource(s) Jan 25 18:42:20.762: INFO: creating 1 resource(s) Jan 25 18:42:20.848: INFO: creating 1 resource(s) Jan 25 18:42:20.929: INFO: creating 1 resource(s) Jan 25 18:42:21.013: INFO: creating 1 resource(s) Jan 25 18:42:21.093: INFO: creating 1 resource(s) Jan 25 18:42:21.288: INFO: creating 1 resource(s) Jan 25 18:42:21.401: INFO: creating 1 resource(s) Jan 25 18:42:21.474: INFO: creating 1 resource(s) Jan 25 18:42:21.550: INFO: creating 1 resource(s) Jan 25 18:42:21.629: INFO: creating 1 resource(s) Jan 25 18:42:21.699: INFO: creating 1 resource(s) Jan 25 18:42:21.772: INFO: creating 1 resource(s) Jan 25 18:42:21.883: INFO: creating 1 resource(s) Jan 25 18:42:21.959: INFO: creating 1 resource(s) Jan 25 18:42:22.044: INFO: creating 1 resource(s) Jan 25 18:42:22.146: INFO: creating 1 resource(s) Jan 25 18:42:22.232: INFO: creating 1 resource(s) Jan 25 18:42:22.335: INFO: creating 1 resource(s) Jan 25 18:42:22.494: INFO: creating 1 resource(s) Jan 25 18:42:22.856: INFO: creating 1 resource(s) Jan 25 18:42:22.934: INFO: Clearing discovery cache Jan 25 18:42:22.934: INFO: beginning wait for 21 resources with timeout of 1m0s Jan 25 18:42:26.433: INFO: creating 1 resource(s) Jan 25 18:42:27.040: INFO: creating 6 resource(s) Jan 25 18:42:27.949: INFO: Install complete STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:60 @ 01/25/23 18:42:28.431 STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 18:42:28.686 Jan 25 18:42:28.686: INFO: starting to wait for deployment to become available Jan 25 18:42:38.811: INFO: Deployment tigera-operator/tigera-operator is now available, took 10.124484752s STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:74 @ 01/25/23 18:42:39.628 STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 18:42:39.94 Jan 25 18:42:39.940: INFO: starting to wait for deployment to become available Jan 25 18:43:41.305: INFO: Deployment calico-system/calico-kube-controllers is now available, took 1m1.364714427s STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 
18:43:41.858 Jan 25 18:43:41.858: INFO: starting to wait for deployment to become available Jan 25 18:43:41.919: INFO: Deployment calico-system/calico-typha is now available, took 61.442909ms STEP: Waiting for Ready calico-apiserver deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:79 @ 01/25/23 18:43:41.919 STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 18:43:42.357 Jan 25 18:43:42.357: INFO: starting to wait for deployment to become available Jan 25 18:43:52.483: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 10.125433308s INFO: Waiting for the first control plane machine managed by capz-e2e-y7t1gk/capz-e2e-y7t1gk-cc-dxqmc to be provisioned STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:133 @ 01/25/23 18:43:52.535 STEP: Installing azure-disk CSI driver components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:65 @ 01/25/23 18:43:52.551 Jan 25 18:43:52.663: INFO: getting history for release azuredisk-csi-driver-oot Jan 25 18:43:52.728: INFO: Release azuredisk-csi-driver-oot does not exist, installing it Jan 25 18:43:55.582: INFO: creating 1 resource(s) Jan 25 18:43:55.811: INFO: creating 18 resource(s) Jan 25 18:43:56.342: INFO: Install complete STEP: Waiting for Ready csi-azuredisk-controller deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:75 @ 01/25/23 18:43:56.365 STEP: waiting for deployment kube-system/csi-azuredisk-controller to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 18:43:56.623 Jan 25 18:43:56.623: INFO: starting to wait for deployment to become available Jan 25 18:44:36.961: INFO: Deployment kube-system/csi-azuredisk-controller is now available, took 40.337993183s INFO: Waiting for control plane to be ready INFO: Waiting for control plane capz-e2e-y7t1gk/capz-e2e-y7t1gk-cc-dxqmc to be ready (implies underlying nodes to be ready as well) STEP: Waiting for the control plane to be ready - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:165 @ 01/25/23 18:44:36.992 STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:196 @ 01/25/23 18:44:37.005 INFO: Waiting for the machine deployments to be provisioned STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:102 @ 01/25/23 18:44:37.073 STEP: Checking all the machines controlled by capz-e2e-y7t1gk-cc-md-0-nxnlq are in the "<None>" failure domain - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/25/23 18:44:37.097 STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:102 @ 01/25/23 18:44:37.121 [FAILED] Timed out after 1500.001s. 
Timed out waiting for 1 nodes to be created for MachineDeployment capz-e2e-y7t1gk/capz-e2e-y7t1gk-cc-md-win-fhcw7 Expected <int>: 0 to equal <int>: 1 In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:131 @ 01/25/23 19:09:37.124 < Exit [It] with a single control plane node, one linux worker node, and one windows worker node - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:906 @ 01/25/23 19:09:37.124 (31m24.519s) > Enter [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:117 @ 01/25/23 19:09:37.124 Jan 25 19:09:37.124: INFO: FAILED! Jan 25 19:09:37.124: INFO: Cleaning up after "Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node" spec STEP: Dumping logs from the "capz-e2e-y7t1gk-cc" workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 19:09:37.124 Jan 25 19:09:37.124: INFO: Dumping workload cluster capz-e2e-y7t1gk/capz-e2e-y7t1gk-cc logs Jan 25 19:09:37.257: INFO: Collecting logs for Linux node capz-e2e-y7t1gk-cc-control-plane-x4j5q-79l2d in cluster capz-e2e-y7t1gk-cc in namespace capz-e2e-y7t1gk Jan 25 19:10:38.770: INFO: Collecting boot logs for AzureMachine capz-e2e-y7t1gk-cc-control-plane-x4j5q-79l2d Jan 25 19:10:40.089: INFO: Collecting logs for Linux node capz-e2e-y7t1gk-cc-md-0-infra-69vh4-ql82k in cluster capz-e2e-y7t1gk-cc in namespace capz-e2e-y7t1gk Jan 25 19:11:41.589: INFO: Collecting boot logs for AzureMachine capz-e2e-y7t1gk-cc-md-0-infra-69vh4-ql82k Jan 25 19:11:42.357: INFO: Unable to collect logs as node doesn't have addresses Jan 25 19:11:42.357: INFO: Collecting logs for Windows node capz-e2e-y7t1gk-cc-md-win-infra-jmznw-rnp2r in cluster capz-e2e-y7t1gk-cc in namespace capz-e2e-y7t1gk Jan 25 19:15:48.294: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-e2e-y7t1gk-cc-md-win-infra-jmznw-rnp2r to /logs/artifacts/clusters/capz-e2e-y7t1gk-cc/machines/capz-e2e-y7t1gk-cc-md-win-fhcw7-7565476557-p4ml4/crashdumps.tar Jan 25 19:15:48.785: INFO: Collecting boot logs for AzureMachine capz-e2e-y7t1gk-cc-md-win-infra-jmznw-rnp2r Jan 25 19:15:48.810: INFO: Dumping workload cluster capz-e2e-y7t1gk/capz-e2e-y7t1gk-cc kube-system pod logs Jan 25 19:15:49.455: INFO: Collecting events for Pod calico-apiserver/calico-apiserver-76d4444d96-8xcsh Jan 25 19:15:49.455: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-76d4444d96-95qlt, container calico-apiserver Jan 25 19:15:49.455: INFO: Collecting events for Pod calico-apiserver/calico-apiserver-76d4444d96-95qlt Jan 25 19:15:49.455: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-76d4444d96-8xcsh, container calico-apiserver Jan 25 19:15:49.606: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-5f9dc85578-4kx7v, container calico-kube-controllers Jan 25 19:15:49.606: INFO: Creating log watcher for controller calico-system/calico-typha-fbc64b79c-bv7jn, container calico-typha Jan 25 19:15:49.607: INFO: Collecting events for Pod calico-system/calico-node-p4k6b Jan 25 19:15:49.607: INFO: Collecting events for Pod calico-system/calico-kube-controllers-5f9dc85578-4kx7v Jan 25 19:15:49.607: INFO: Creating log watcher for controller calico-system/csi-node-driver-6vbnm, container csi-node-driver-registrar Jan 25 19:15:49.607: INFO: Creating 
log watcher for controller calico-system/calico-node-p4k6b, container calico-node Jan 25 19:15:49.608: INFO: Collecting events for Pod calico-system/calico-typha-fbc64b79c-bv7jn Jan 25 19:15:49.608: INFO: Creating log watcher for controller calico-system/calico-node-snp64, container calico-node Jan 25 19:15:49.608: INFO: Collecting events for Pod calico-system/calico-node-snp64 Jan 25 19:15:49.608: INFO: Creating log watcher for controller calico-system/csi-node-driver-6vbnm, container calico-csi Jan 25 19:15:49.608: INFO: Collecting events for Pod calico-system/csi-node-driver-6vbnm Jan 25 19:15:49.608: INFO: Creating log watcher for controller calico-system/csi-node-driver-88wm2, container csi-node-driver-registrar Jan 25 19:15:49.608: INFO: Creating log watcher for controller calico-system/csi-node-driver-88wm2, container calico-csi Jan 25 19:15:49.608: INFO: Collecting events for Pod calico-system/csi-node-driver-88wm2 Jan 25 19:15:49.713: INFO: Collecting events for Pod kube-system/coredns-565d847f94-529z7 Jan 25 19:15:49.713: INFO: Collecting events for Pod kube-system/csi-azuredisk-controller-6dbd9768d6-4rgj2 Jan 25 19:15:49.713: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbd9768d6-4rgj2, container csi-attacher Jan 25 19:15:49.713: INFO: Creating log watcher for controller kube-system/coredns-565d847f94-529z7, container coredns Jan 25 19:15:49.714: INFO: Collecting events for Pod kube-system/coredns-565d847f94-6rxm8 Jan 25 19:15:49.714: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbd9768d6-4rgj2, container csi-snapshotter Jan 25 19:15:49.714: INFO: Creating log watcher for controller kube-system/coredns-565d847f94-6rxm8, container coredns Jan 25 19:15:49.715: INFO: Collecting events for Pod kube-system/etcd-capz-e2e-y7t1gk-cc-control-plane-x4j5q-79l2d Jan 25 19:15:49.715: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbd9768d6-4rgj2, container csi-provisioner Jan 25 19:15:49.715: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-sx7bt, container liveness-probe Jan 25 19:15:49.715: INFO: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-y7t1gk-cc-control-plane-x4j5q-79l2d, container kube-apiserver Jan 25 19:15:49.715: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbd9768d6-4rgj2, container csi-resizer Jan 25 19:15:49.715: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbd9768d6-4rgj2, container liveness-probe Jan 25 19:15:49.716: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-sx7bt, container node-driver-registrar Jan 25 19:15:49.716: INFO: Collecting events for Pod kube-system/kube-proxy-2gnhg Jan 25 19:15:49.716: INFO: Collecting events for Pod kube-system/kube-apiserver-capz-e2e-y7t1gk-cc-control-plane-x4j5q-79l2d Jan 25 19:15:49.716: INFO: Creating log watcher for controller kube-system/kube-proxy-8x2pv, container kube-proxy Jan 25 19:15:49.716: INFO: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-y7t1gk-cc-control-plane-x4j5q-79l2d, container kube-controller-manager Jan 25 19:15:49.717: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbd9768d6-4rgj2, container azuredisk Jan 25 19:15:49.717: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-sx7bt, container azuredisk Jan 25 19:15:49.717: INFO: Creating log watcher for controller 
kube-system/csi-azuredisk-node-t7f9k, container node-driver-registrar Jan 25 19:15:49.717: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-t7f9k, container azuredisk Jan 25 19:15:49.718: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-sx7bt Jan 25 19:15:49.718: INFO: Creating log watcher for controller kube-system/etcd-capz-e2e-y7t1gk-cc-control-plane-x4j5q-79l2d, container etcd Jan 25 19:15:49.718: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-t7f9k Jan 25 19:15:49.718: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-t7f9k, container liveness-probe Jan 25 19:15:49.719: INFO: Collecting events for Pod kube-system/kube-controller-manager-capz-e2e-y7t1gk-cc-control-plane-x4j5q-79l2d Jan 25 19:15:49.719: INFO: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-y7t1gk-cc-control-plane-x4j5q-79l2d, container kube-scheduler Jan 25 19:15:49.719: INFO: Creating log watcher for controller kube-system/kube-proxy-2gnhg, container kube-proxy Jan 25 19:15:49.719: INFO: Collecting events for Pod kube-system/kube-proxy-8x2pv Jan 25 19:15:49.720: INFO: Collecting events for Pod kube-system/kube-scheduler-capz-e2e-y7t1gk-cc-control-plane-x4j5q-79l2d Jan 25 19:15:49.826: INFO: Fetching kube-system pod logs took 1.016140484s Jan 25 19:15:49.826: INFO: Dumping workload cluster capz-e2e-y7t1gk/capz-e2e-y7t1gk-cc Azure activity log Jan 25 19:15:49.826: INFO: Creating log watcher for controller tigera-operator/tigera-operator-64db64cb98-d7gmh, container tigera-operator Jan 25 19:15:49.827: INFO: Collecting events for Pod tigera-operator/tigera-operator-64db64cb98-d7gmh Jan 25 19:15:49.851: INFO: Error fetching activity logs for cluster capz-e2e-y7t1gk-cc in namespace capz-e2e-y7t1gk. Not able to find the AzureManagedControlPlane on the management cluster: azuremanagedcontrolplanes.infrastructure.cluster.x-k8s.io "capz-e2e-y7t1gk-cc" not found Jan 25 19:15:49.851: INFO: Fetching activity logs took 25.554539ms Jan 25 19:15:49.851: INFO: Dumping all the Cluster API resources in the "capz-e2e-y7t1gk" namespace Jan 25 19:15:50.299: INFO: Deleting all clusters in the capz-e2e-y7t1gk namespace STEP: Deleting cluster capz-e2e-y7t1gk-cc - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/25/23 19:15:50.325 INFO: Waiting for the Cluster capz-e2e-y7t1gk/capz-e2e-y7t1gk-cc to be deleted STEP: Waiting for cluster capz-e2e-y7t1gk-cc to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/25/23 19:15:50.338 Jan 25 19:19:30.615: INFO: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-y7t1gk Jan 25 19:19:30.655: INFO: Checking if any resources are left over in Azure for spec "create-workload-cluster" STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:216 @ 01/25/23 19:19:31.409 INFO: "with a single control plane node, one linux worker node, and one windows worker node" started at Wed, 25 Jan 2023 19:21:21 UTC on Ginkgo node 3 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml < Exit [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:117 @ 01/25/23 19:21:21.931 (11m44.807s)
Filter through log files | View test history on testgrid
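The timeout above comes from the cluster-api test framework's MachineDeployment node-wait helper (machinedeployment_helpers.go:131): the Windows MachineDeployment capz-e2e-y7t1gk-cc-md-win-fhcw7 never produced a Machine with a Node, so the polled count stayed at 0 while 1 was expected. Below is a minimal sketch of that kind of check, assuming hypothetical helper names (countReadyMachines, waitForMachineDeploymentNodes) rather than the framework's actual code.

```go
// Illustrative sketch: poll the Machines owned by a MachineDeployment and
// require that the number with a NodeRef matches the desired replica count.
// Names and timeouts are hypothetical, not the framework's exact code.
package e2esketch

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// countReadyMachines returns how many Machines selected by the
// MachineDeployment's label selector have been assigned a Node.
func countReadyMachines(ctx context.Context, c client.Client, md *clusterv1.MachineDeployment) int {
	machines := &clusterv1.MachineList{}
	if err := c.List(ctx, machines,
		client.InNamespace(md.Namespace),
		client.MatchingLabels(md.Spec.Selector.MatchLabels)); err != nil {
		return 0
	}
	count := 0
	for _, m := range machines.Items {
		if m.Status.NodeRef != nil {
			count++
		}
	}
	return count
}

// waitForMachineDeploymentNodes fails the spec with the observed
// "Expected <int>: 0 to equal <int>: 1" message if no Machine ever
// registers a Node before the timeout.
func waitForMachineDeploymentNodes(ctx context.Context, c client.Client, md *clusterv1.MachineDeployment) {
	Eventually(func() int {
		return countReadyMachines(ctx, c, md)
	}, 25*time.Minute, 30*time.Second).Should(Equal(int(*md.Spec.Replicas)))
}
```

The log-collection errors in the dump above (SSH publickey authentication refused, "AzureMachine provider ID is nil", node without addresses) are consistent with the Windows VM never registering as a Node, which is what keeps the polled count at zero.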
capz-e2e [It] Workload cluster creation Creating a Flatcar cluster [OPTIONAL] With Flatcar control-plane and worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and machinepools [OPTIONAL] with 1 control plane node and 1 machinepool
capz-e2e [It] Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e [It] Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [It] Conformance Tests conformance-tests
capz-e2e [It] Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e [It] Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e [It] Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e [It] Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e [It] Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e [It] Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e [It] Workload cluster creation Creating an AKS cluster [Managed Kubernetes] with a single control plane node and 1 node