PR       | jackfrancis: Update default k8s version to v1.25 for testing
Result   | FAILURE
Tests    | 2 failed / 25 succeeded
Started  |
Elapsed  | 1h0m
Revision | aa4b89f70338b5bf172b792cbe9a26a0f73595d6
Refs     | 3088
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sWorkload\scluster\screation\sCreating\sa\sGPU\-enabled\scluster\s\[OPTIONAL\]\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
[FAILED] Timed out after 1500.001s.

Logs for pod gpu-operator-node-feature-discovery-master-77bc558fdc-69wc5:
I0125 15:59:23.936645 1 nfd-master.go:170] Node Feature Discovery Master v0.10.1
I0125 15:59:23.936741 1 nfd-master.go:174] NodeName: "capz-e2e-djq8wu-gpu-control-plane-8r5t5"
I0125 15:59:23.936749 1 nfd-master.go:185] starting nfd LabelRule controller
I0125 15:59:23.972670 1 nfd-master.go:226] gRPC server serving on port: 8080
I0125 15:59:45.616117 1 nfd-master.go:423] received labeling request for node "capz-e2e-djq8wu-gpu-control-plane-8r5t5"
I0125 16:00:42.328925 1 nfd-master.go:423] received labeling request for node "capz-e2e-djq8wu-gpu-md-0-9gpqg"
[... the same pair of "received labeling request" entries repeats roughly once per minute for both nodes through 16:25:46 ...]

Logs for pod gpu-operator-node-feature-discovery-worker-8mwl2:
I0125 16:00:42.295387 1 nfd-worker.go:155] Node Feature Discovery Worker v0.10.1
I0125 16:00:42.295462 1 nfd-worker.go:156] NodeName: 'capz-e2e-djq8wu-gpu-md-0-9gpqg'
I0125 16:00:42.296063 1 nfd-worker.go:423] configuration file "/etc/kubernetes/node-feature-discovery/nfd-worker.conf" parsed
I0125 16:00:42.296163 1 nfd-worker.go:461] worker (re-)configuration successfully completed
I0125 16:00:42.296215 1 base.go:126] connecting to nfd-master at gpu-operator-node-feature-discovery-master:8080 ...
[... gRPC [core] connectivity log: empty-scheme fallback, "pick_first" balancer, Subchannel/Channel CONNECTING -> READY at 16:00:42.302788 ...]
I0125 16:00:42.311456 1 nfd-worker.go:472] starting feature discovery...
I0125 16:00:42.311563 1 nfd-worker.go:484] feature discovery completed
I0125 16:00:42.311574 1 nfd-worker.go:565] sending labeling request to nfd-master
[... the same three-line discovery/labeling cycle repeats once per minute through 16:25:43 ...]

Logs for pod gpu-operator-node-feature-discovery-worker-ht8x2:
I0125 15:59:21.096606 1 nfd-worker.go:155] Node Feature Discovery Worker v0.10.1
I0125 15:59:21.096878 1 nfd-worker.go:156] NodeName: 'capz-e2e-djq8wu-gpu-control-plane-8r5t5'
I0125 15:59:21.099150 1 nfd-worker.go:423] configuration file "/etc/kubernetes/node-feature-discovery/nfd-worker.conf" parsed
I0125 15:59:21.099449 1 nfd-worker.go:461] worker (re-)configuration successfully completed
I0125 15:59:21.099700 1 base.go:126] connecting to nfd-master at gpu-operator-node-feature-discovery-master:8080 ...
W0125 15:59:21.130468 1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 10.102.228.73:8080: connect: connection refused". Reconnecting...
[... the same connection-refused retry repeats with backoff at 15:59:22, 15:59:24, 15:59:26, 15:59:29, and 15:59:36, each cycling Subchannel/Channel through CONNECTING -> TRANSIENT_FAILURE ...]
I0125 15:59:45.585489 1 component.go:36] [core]Subchannel Connectivity change to READY
I0125 15:59:45.585512 1 component.go:36] [core]Channel Connectivity change to READY
I0125 15:59:45.600304 1 nfd-worker.go:472] starting feature discovery...
I0125 15:59:45.601208 1 nfd-worker.go:484] feature discovery completed
I0125 15:59:45.601233 1 nfd-worker.go:565] sending labeling request to nfd-master
[... the same three-line discovery/labeling cycle repeats once per minute through 16:25:46 ...]

Expected
    <bool>: false
to be true
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:80 @ 01/25/23 16:25:54.098

from junit.e2e_suite.1.xml
2023/01/25 15:53:05 failed trying to get namespace (capz-e2e-djq8wu): namespaces "capz-e2e-djq8wu" not found
cluster.cluster.x-k8s.io/capz-e2e-djq8wu-gpu serverside-applied
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-djq8wu-gpu serverside-applied
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-djq8wu-gpu-control-plane serverside-applied
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-djq8wu-gpu-control-plane serverside-applied
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp serverside-applied
machinedeployment.cluster.x-k8s.io/capz-e2e-djq8wu-gpu-md-0 serverside-applied
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-djq8wu-gpu-md-0 serverside-applied
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-djq8wu-gpu-md-0 serverside-applied
clusterresourceset.addons.cluster.x-k8s.io/crs-gpu-operator serverside-applied
configmap/nvidia-clusterpolicy-crd serverside-applied
configmap/nvidia-gpu-operator-components serverside-applied
felixconfiguration.crd.projectcalico.org/default configured

> Enter [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:56 @ 01/25/23 15:53:05.236
INFO: "" started at Wed, 25 Jan 2023 15:53:05 UTC on Ginkgo node 7 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating namespace "capz-e2e-djq8wu" for hosting the cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 15:53:05.237
Jan 25 15:53:05.237: INFO: starting to create namespace for hosting the "capz-e2e-djq8wu" test spec
INFO: Creating namespace capz-e2e-djq8wu
INFO: Creating event watcher for namespace "capz-e2e-djq8wu"
Jan 25 15:53:05.358: INFO: Using existing cluster identity secret
< Exit [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:56 @ 01/25/23 15:53:05.358 (122ms)

> Enter [It] with a single control plane node and 1 node - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:506 @ 01/25/23 15:53:05.358
INFO: Cluster name is capz-e2e-djq8wu-gpu
INFO: Creating the workload cluster with name "capz-e2e-djq8wu-gpu" using the "nvidia-gpu" template (Kubernetes v1.25.6, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-e2e-djq8wu-gpu --infrastructure (default) --kubernetes-version v1.25.6 --control-plane-machine-count 1 --worker-machine-count 1 --flavor nvidia-gpu
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:134 @ 01/25/23 15:53:09.457
INFO: Waiting for control plane to be initialized
STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:51 @ 01/25/23 15:55:09.575
STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:112 @ 01/25/23 15:55:09.575
Jan 25 15:57:12.940: INFO: getting history for release projectcalico
Jan 25 15:57:13.015: INFO: Release projectcalico does not exist, installing it
[... 21 "creating 1 resource(s)" entries from 15:57:14.061 through 15:57:16.650 ...]
Jan 25 15:57:16.728: INFO: Clearing discovery cache
Jan 25 15:57:16.729: INFO: beginning wait for 21 resources with timeout of 1m0s
Jan 25 15:57:20.680: INFO: creating 1 resource(s)
Jan 25 15:57:21.164: INFO: creating 6 resource(s)
Jan 25 15:57:21.895: INFO: Install complete
STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:60 @ 01/25/23 15:57:22.34
STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 15:57:22.582
Jan 25 15:57:22.582: INFO: starting to wait for deployment to become available
Jan 25 15:57:32.697: INFO: Deployment tigera-operator/tigera-operator is now available, took 10.114984952s
STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:74 @ 01/25/23 15:57:34.641
STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 15:57:34.927
Jan 25 15:57:34.927: INFO: starting to wait for deployment to become available
Jan 25 15:58:35.370: INFO: Deployment calico-system/calico-kube-controllers is now available, took 1m0.442659322s
STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 15:58:35.889
Jan 25 15:58:35.889: INFO: starting to wait for deployment to become available
Jan 25 15:58:35.953: INFO: Deployment calico-system/calico-typha is now available, took 64.415734ms
STEP: Waiting for Ready calico-apiserver deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:79 @ 01/25/23 15:58:35.953
STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 15:58:36.394
Jan 25 15:58:36.394: INFO: starting to wait for deployment to become available
Jan 25 15:59:46.876: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 1m10.482144532s
INFO: Waiting for the first control plane machine managed by capz-e2e-djq8wu/capz-e2e-djq8wu-gpu-control-plane to be provisioned
STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:133 @ 01/25/23 15:59:46.895
STEP: Installing azure-disk CSI driver components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:65 @ 01/25/23 15:59:46.9
Jan 25 15:59:46.974: INFO: getting history for release azuredisk-csi-driver-oot
Jan 25 15:59:47.031: INFO: Release azuredisk-csi-driver-oot does not exist, installing it
Jan 25 15:59:50.221: INFO: creating 1 resource(s)
Jan 25 15:59:50.363: INFO: creating 18 resource(s)
Jan 25 15:59:50.922: INFO: Install complete
STEP: Waiting for Ready csi-azuredisk-controller deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:75 @ 01/25/23 15:59:50.94
STEP: waiting for deployment kube-system/csi-azuredisk-controller to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 15:59:51.175
Jan 25 15:59:51.175: INFO: starting to wait for deployment to become available
Jan 25 16:00:31.510: INFO: Deployment kube-system/csi-azuredisk-controller is now available, took 40.334901658s
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-e2e-djq8wu/capz-e2e-djq8wu-gpu-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:165 @ 01/25/23 16:00:31.525
STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:196 @ 01/25/23 16:00:31.536
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:102 @ 01/25/23 16:00:31.565
STEP: Checking all the machines controlled by capz-e2e-djq8wu-gpu-md-0 are in the "<None>" failure domain - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/25/23 16:00:31.576
INFO: Waiting for the machine pools to be provisioned
INFO: Calling PostMachinesProvisioned
STEP: Waiting for all DaemonSet Pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/daemonsets.go:71 @ 01/25/23 16:00:31.669
STEP: waiting for 2 daemonset calico-system/calico-node pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 16:00:32.02
STEP: waiting for 2 daemonset calico-system/calico-node pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 16:00:42.08
Jan 25 16:00:42.080: INFO: 2 daemonset calico-system/calico-node pods are running, took 10.118033878s
STEP: waiting for 2 daemonset calico-system/csi-node-driver pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 16:00:42.194
Jan 25 16:00:42.194: INFO: 2 daemonset
calico-system/csi-node-driver pods are running, took 112.827165ms STEP: waiting for 2 daemonset gpu-operator-resources/gpu-operator-node-feature-discovery-worker pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 16:00:42.306 STEP: waiting for 2 daemonset gpu-operator-resources/gpu-operator-node-feature-discovery-worker pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 16:00:52.364 Jan 25 16:00:52.364: INFO: 2 daemonset gpu-operator-resources/gpu-operator-node-feature-discovery-worker pods are running, took 10.169404143s STEP: waiting for 2 daemonset kube-system/csi-azuredisk-node pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 16:00:52.423 Jan 25 16:00:52.423: INFO: 2 daemonset kube-system/csi-azuredisk-node pods are running, took 57.638158ms STEP: daemonset kube-system/csi-azuredisk-node-win has no schedulable nodes, will skip - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 16:00:52.482 STEP: waiting for 2 daemonset kube-system/kube-proxy pods to be Running - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 16:00:52.539 Jan 25 16:00:52.539: INFO: 2 daemonset kube-system/kube-proxy pods are running, took 57.015504ms STEP: Verifying expected VM extensions are present on the node - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:544 @ 01/25/23 16:00:52.54 STEP: creating a Kubernetes client to the workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_vmextensions.go:62 @ 01/25/23 16:00:52.54 STEP: Retrieving all machines from the machine template spec - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_vmextensions.go:68 @ 01/25/23 16:00:52.573 Jan 25 16:00:52.573: INFO: Listing machines in namespace 
capz-e2e-djq8wu with label cluster.x-k8s.io/cluster-name=capz-e2e-djq8wu-gpu STEP: Creating a mapping of machine IDs to array of expected VM extensions - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_vmextensions.go:83 @ 01/25/23 16:00:52.578 STEP: Creating a VM and VM extension client - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_vmextensions.go:91 @ 01/25/23 16:00:52.578 STEP: Verifying specified VM extensions are created on Azure - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_vmextensions.go:108 @ 01/25/23 16:00:53.384 STEP: Retrieving all machine pools from the machine template spec - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_vmextensions.go:123 @ 01/25/23 16:00:53.806 Jan 25 16:00:53.806: INFO: Listing machine pools in namespace capz-e2e-djq8wu with label cluster.x-k8s.io/cluster-name=capz-e2e-djq8wu-gpu END STEP: Verifying expected VM extensions are present on the node - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:544 @ 01/25/23 16:00:53.809 (1.27s) STEP: Running a GPU-based calculation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:554 @ 01/25/23 16:00:53.809 STEP: creating a Kubernetes client to the workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:62 @ 01/25/23 16:00:53.809 STEP: Waiting for a node to have an "nvidia.com/gpu" allocatable resource - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:68 @ 01/25/23 16:00:53.831 END STEP: Running a GPU-based calculation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:554 @ 01/25/23 16:25:54.098 (25m0.289s) [FAILED] Timed out after 1500.001s. 
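The "starting to wait for deployment to become available ... took Ns" entries above, and the 1500s GPU wait that ultimately times out, all follow the same poll-until-deadline shape. A minimal sketch of that pattern; the function name and parameters here are illustrative, not the e2e framework's actual API:

```python
import time

def wait_for(check, timeout=60.0, interval=1.0, clock=time.monotonic, sleep=time.sleep):
    """Poll `check` every `interval` seconds; return the elapsed time once it
    succeeds, or raise TimeoutError when `timeout` elapses first."""
    start = clock()
    while True:
        if check():
            return clock() - start
        if clock() - start >= timeout:
            raise TimeoutError(f"condition not met after {timeout}s")
        sleep(interval)

# Example: a condition that becomes true on the third poll.
ready_at = iter([False, False, True])
elapsed = wait_for(lambda: next(ready_at), timeout=5.0, interval=0.0, sleep=lambda s: None)
print(elapsed)  # prints a small elapsed time, well under the 5s timeout
```

In the failing step below, the equivalent `check` (a node advertising an `nvidia.com/gpu` allocatable resource) never returned true within the 1500s deadline.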
Logs for pod gpu-operator-node-feature-discovery-master-77bc558fdc-69wc5:
I0125 15:59:23.936645 1 nfd-master.go:170] Node Feature Discovery Master v0.10.1
I0125 15:59:23.936741 1 nfd-master.go:174] NodeName: "capz-e2e-djq8wu-gpu-control-plane-8r5t5"
I0125 15:59:23.936749 1 nfd-master.go:185] starting nfd LabelRule controller
I0125 15:59:23.972670 1 nfd-master.go:226] gRPC server serving on port: 8080
I0125 15:59:45.616117 1 nfd-master.go:423] received labeling request for node "capz-e2e-djq8wu-gpu-control-plane-8r5t5"
I0125 16:00:42.328925 1 nfd-master.go:423] received labeling request for node "capz-e2e-djq8wu-gpu-md-0-9gpqg"
[... alternating "received labeling request" entries for the two nodes continue once a minute through 16:24:46 ...]
I0125 16:25:43.107200 1 nfd-master.go:423] received labeling request for node "capz-e2e-djq8wu-gpu-md-0-9gpqg"
I0125 16:25:46.613444 1 nfd-master.go:423] received labeling request for node "capz-e2e-djq8wu-gpu-control-plane-8r5t5"

Logs for pod gpu-operator-node-feature-discovery-worker-8mwl2:
I0125 16:00:42.295387 1 nfd-worker.go:155] Node Feature Discovery Worker v0.10.1
I0125 16:00:42.295462 1 nfd-worker.go:156] NodeName: 'capz-e2e-djq8wu-gpu-md-0-9gpqg'
I0125 16:00:42.296063 1 nfd-worker.go:423] configuration file "/etc/kubernetes/node-feature-discovery/nfd-worker.conf" parsed
I0125 16:00:42.296163 1 nfd-worker.go:461] worker (re-)configuration successfully completed
I0125 16:00:42.296215 1 base.go:126] connecting to nfd-master at gpu-operator-node-feature-discovery-master:8080 ...
I0125 16:00:42.296244 1 component.go:36] [core]parsed scheme: ""
I0125 16:00:42.296252 1 component.go:36] [core]scheme "" not registered, fallback to default scheme
I0125 16:00:42.296281 1 component.go:36] [core]ccResolverWrapper: sending update to cc: {[{gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}] <nil> <nil>}
I0125 16:00:42.296305 1 component.go:36] [core]ClientConn switching balancer to "pick_first"
I0125 16:00:42.296310 1 component.go:36] [core]Channel switches to new LB policy "pick_first"
I0125 16:00:42.296340 1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
I0125 16:00:42.296384 1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
I0125 16:00:42.296680 1 component.go:36] [core]Channel Connectivity change to CONNECTING
I0125 16:00:42.302771 1 component.go:36] [core]Subchannel Connectivity change to READY
I0125 16:00:42.302788 1 component.go:36] [core]Channel Connectivity change to READY
I0125 16:00:42.311456 1 nfd-worker.go:472] starting feature discovery...
I0125 16:00:42.311563 1 nfd-worker.go:484] feature discovery completed
I0125 16:00:42.311574 1 nfd-worker.go:565] sending labeling request to nfd-master
[... the "starting feature discovery / feature discovery completed / sending labeling request" triplet repeats once a minute from 16:01:42 through 16:24:43 ...]
I0125 16:25:43.092901 1 nfd-worker.go:472] starting feature discovery...
I0125 16:25:43.093012 1 nfd-worker.go:484] feature discovery completed
I0125 16:25:43.093025 1 nfd-worker.go:565] sending labeling request to nfd-master

Logs for pod gpu-operator-node-feature-discovery-worker-ht8x2:
I0125 15:59:21.096606 1 nfd-worker.go:155] Node Feature Discovery Worker v0.10.1
I0125 15:59:21.096878 1 nfd-worker.go:156] NodeName: 'capz-e2e-djq8wu-gpu-control-plane-8r5t5'
I0125 15:59:21.099150 1 nfd-worker.go:423] configuration file "/etc/kubernetes/node-feature-discovery/nfd-worker.conf" parsed
I0125 15:59:21.099449 1 nfd-worker.go:461] worker (re-)configuration successfully completed
I0125 15:59:21.099700 1 base.go:126] connecting to nfd-master at gpu-operator-node-feature-discovery-master:8080 ...
I0125 15:59:21.100007 1 component.go:36] [core]parsed scheme: ""
I0125 15:59:21.100119 1 component.go:36] [core]scheme "" not registered, fallback to default scheme
I0125 15:59:21.100330 1 component.go:36] [core]ccResolverWrapper: sending update to cc: {[{gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}] <nil> <nil>}
I0125 15:59:21.100515 1 component.go:36] [core]ClientConn switching balancer to "pick_first"
I0125 15:59:21.100662 1 component.go:36] [core]Channel switches to new LB policy "pick_first"
I0125 15:59:21.100793 1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
I0125 15:59:21.101107 1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
I0125 15:59:21.113287 1 component.go:36] [core]Channel Connectivity change to CONNECTING
W0125 15:59:21.130468 1 component.go:41] [core]grpc: addrConn.createTransport failed to connect to {gpu-operator-node-feature-discovery-master:8080 gpu-operator-node-feature-discovery-master:8080 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 10.102.228.73:8080: connect: connection refused". Reconnecting...
I0125 15:59:21.130642 1 component.go:36] [core]Subchannel Connectivity change to TRANSIENT_FAILURE
I0125 15:59:21.130811 1 component.go:36] [core]Channel Connectivity change to TRANSIENT_FAILURE
[... the CONNECTING -> "connection refused" -> TRANSIENT_FAILURE cycle repeats with backoff at 15:59:22, 15:59:24, 15:59:26, 15:59:29, and 15:59:36 while the nfd-master pod is still starting ...]
I0125 15:59:45.583025 1 component.go:36] [core]Subchannel Connectivity change to CONNECTING
I0125 15:59:45.583225 1 component.go:36] [core]Subchannel picks a new address "gpu-operator-node-feature-discovery-master:8080" to connect
I0125 15:59:45.583575 1 component.go:36] [core]Channel Connectivity change to CONNECTING
I0125 15:59:45.585489 1 component.go:36] [core]Subchannel Connectivity change to READY
I0125 15:59:45.585512 1 component.go:36] [core]Channel Connectivity change to READY
I0125 15:59:45.600304 1 nfd-worker.go:472] starting feature discovery...
I0125 15:59:45.601208 1 nfd-worker.go:484] feature discovery completed
I0125 15:59:45.601233 1 nfd-worker.go:565] sending labeling request to nfd-master
[... the triplet repeats once a minute at 16:00:45, 16:01:45, 16:02:45, and 16:03:45 ...]
I0125 16:04:45.850591 1 nfd-worker.go:472] starting feature discovery...
I0125 16:04:45.851045 1 nfd-worker.go:484] feature discovery completed
I0125 16:04:45.851064 1 nfd-worker.go:565] sending labeling request to nfd-master
[... the "starting feature discovery / feature discovery completed / sending labeling request" triplet repeats once a minute from 16:05:45 through 16:24:46 ...]
I0125 16:25:46.600193 1 nfd-worker.go:472] starting feature discovery...
I0125 16:25:46.600544 1 nfd-worker.go:484] feature discovery completed
I0125 16:25:46.600591 1 nfd-worker.go:565] sending labeling request to nfd-master

Expected
  <bool>: false
to be true
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_gpu.go:80 @ 01/25/23 16:25:54.098

< Exit [It] with a single control plane node and 1 node - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:506 @ 01/25/23 16:25:54.098 (32m48.74s)
> Enter [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:117 @ 01/25/23 16:25:54.098
Jan 25 16:25:54.098: INFO: FAILED!
Jan 25 16:25:54.098: INFO: Cleaning up after "Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node" spec
STEP: Dumping logs from the "capz-e2e-djq8wu-gpu" workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 16:25:54.098
Jan 25 16:25:54.098: INFO: Dumping workload cluster capz-e2e-djq8wu/capz-e2e-djq8wu-gpu logs
Jan 25 16:25:54.142: INFO: Collecting logs for Linux node capz-e2e-djq8wu-gpu-control-plane-8r5t5 in cluster capz-e2e-djq8wu-gpu in namespace capz-e2e-djq8wu
Jan 25 16:26:18.969: INFO: Collecting boot logs for AzureMachine capz-e2e-djq8wu-gpu-control-plane-8r5t5
Jan 25 16:26:20.334: INFO: Collecting logs for Linux node capz-e2e-djq8wu-gpu-md-0-9gpqg in cluster capz-e2e-djq8wu-gpu in namespace capz-e2e-djq8wu
Jan 25 16:26:28.544: INFO: Collecting boot logs for AzureMachine capz-e2e-djq8wu-gpu-md-0-9gpqg
Jan 25 16:26:29.175: INFO: Dumping workload cluster capz-e2e-djq8wu/capz-e2e-djq8wu-gpu kube-system pod logs
Jan 25 16:26:29.622: INFO: Collecting events for Pod calico-apiserver/calico-apiserver-75bc4ddfcc-lm9zb
Jan 25 16:26:29.622: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-75bc4ddfcc-n2d8h, container calico-apiserver
Jan 25 16:26:29.622: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-75bc4ddfcc-lm9zb, container calico-apiserver
Jan 25 16:26:29.623: INFO: Collecting events for Pod calico-apiserver/calico-apiserver-75bc4ddfcc-n2d8h
Jan 25 16:26:29.688: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-5f9dc85578-9cp8z, container calico-kube-controllers
Jan 25 16:26:29.688: INFO: Collecting events for Pod calico-system/calico-kube-controllers-5f9dc85578-9cp8z
Jan 25 16:26:29.688: INFO: Collecting events for Pod calico-system/calico-typha-98649496-zxq7g
Jan 25 16:26:29.688: INFO: Collecting events for Pod calico-system/csi-node-driver-hg7xc
Jan 25 16:26:29.688: INFO: Creating log watcher for controller calico-system/calico-node-dr62b, container calico-node
Jan 25 16:26:29.688: INFO: Creating log watcher for controller calico-system/calico-node-tq856, container calico-node
Jan 25 16:26:29.688: INFO: Creating log watcher for controller calico-system/csi-node-driver-hg7xc, container calico-csi
Jan 25 16:26:29.689: INFO: Creating log watcher for controller calico-system/csi-node-driver-hg7xc, container csi-node-driver-registrar
Jan 25 16:26:29.689: INFO: Creating log watcher for controller calico-system/csi-node-driver-vgmmt, container csi-node-driver-registrar
Jan 25 16:26:29.689: INFO: Collecting events for Pod calico-system/calico-node-tq856
Jan 25 16:26:29.689: INFO: Creating log watcher for controller calico-system/csi-node-driver-vgmmt, container calico-csi
Jan 25 16:26:29.689: INFO: Creating log watcher for controller calico-system/calico-typha-98649496-zxq7g, container calico-typha
Jan 25 16:26:29.690: INFO: Collecting events for Pod calico-system/calico-node-dr62b
Jan 25 16:26:29.690: INFO: Collecting events for Pod calico-system/csi-node-driver-vgmmt
Jan 25 16:26:29.751: INFO: Collecting events for Pod gpu-operator-resources/gpu-operator-bcf6cd75d-dct9x
Jan 25 16:26:29.751: INFO: Creating log watcher for controller gpu-operator-resources/gpu-operator-node-feature-discovery-worker-8mwl2, container worker
Jan 25 16:26:29.751: INFO: Creating log watcher for controller gpu-operator-resources/gpu-operator-node-feature-discovery-master-77bc558fdc-69wc5, container master
Jan 25 16:26:29.751: INFO: Collecting events for Pod gpu-operator-resources/gpu-operator-node-feature-discovery-master-77bc558fdc-69wc5
Jan 25 16:26:29.751: INFO: Creating log watcher for controller gpu-operator-resources/gpu-operator-node-feature-discovery-worker-ht8x2, container worker
Jan 25 16:26:29.751: INFO: Creating log watcher for controller gpu-operator-resources/gpu-operator-bcf6cd75d-dct9x, container gpu-operator
Jan 25 16:26:29.751: INFO: Collecting events for Pod gpu-operator-resources/gpu-operator-node-feature-discovery-worker-ht8x2
Jan 25 16:26:29.752: INFO: Collecting events for Pod gpu-operator-resources/gpu-operator-node-feature-discovery-worker-8mwl2
Jan 25 16:26:29.834: INFO: Creating log watcher for controller kube-system/coredns-565d847f94-7hrcg, container coredns
Jan 25 16:26:29.834: INFO: Collecting events for Pod kube-system/coredns-565d847f94-7hrcg
Jan 25 16:26:29.835: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-f6cvv, container liveness-probe
Jan 25 16:26:29.836: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbd9768d6-grkdl, container liveness-probe
Jan 25 16:26:29.836: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-f6cvv, container node-driver-registrar
Jan 25 16:26:29.836: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbd9768d6-grkdl, container azuredisk
Jan 25 16:26:29.837: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-f6cvv, container azuredisk
Jan 25 16:26:29.837: INFO: Creating log watcher for controller kube-system/coredns-565d847f94-vtvck, container coredns
Jan 25 16:26:29.837: INFO: Collecting events for Pod kube-system/kube-controller-manager-capz-e2e-djq8wu-gpu-control-plane-8r5t5
Jan 25 16:26:29.837: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbd9768d6-grkdl, container csi-resizer
Jan 25 16:26:29.837: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-f6cvv
Jan 25 16:26:29.837: INFO: Creating log watcher for controller kube-system/etcd-capz-e2e-djq8wu-gpu-control-plane-8r5t5, container etcd
Jan 25 16:26:29.837: INFO: Collecting events for Pod kube-system/csi-azuredisk-controller-6dbd9768d6-grkdl
Jan 25 16:26:29.838: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-6ksfj, container liveness-probe
Jan 25 16:26:29.838: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbd9768d6-grkdl, container csi-snapshotter
Jan 25 16:26:29.838: INFO: Collecting events for Pod kube-system/etcd-capz-e2e-djq8wu-gpu-control-plane-8r5t5
Jan 25 16:26:29.838: INFO: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-djq8wu-gpu-control-plane-8r5t5, container kube-apiserver
Jan 25 16:26:29.838: INFO: Collecting events for Pod kube-system/kube-apiserver-capz-e2e-djq8wu-gpu-control-plane-8r5t5
Jan 25 16:26:29.838: INFO: Collecting events for Pod kube-system/coredns-565d847f94-vtvck
Jan 25 16:26:29.838: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbd9768d6-grkdl, container csi-provisioner
Jan 25 16:26:29.838: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbd9768d6-grkdl, container csi-attacher
Jan 25 16:26:29.838: INFO: Creating log watcher for controller kube-system/kube-proxy-c76ts, container kube-proxy
Jan 25 16:26:29.839: INFO: Collecting events for Pod kube-system/kube-proxy-xmhsm
Jan 25 16:26:29.839: INFO: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-djq8wu-gpu-control-plane-8r5t5, container kube-controller-manager
Jan 25 16:26:29.839: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-6ksfj, container node-driver-registrar
Jan 25 16:26:29.839: INFO: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-djq8wu-gpu-control-plane-8r5t5, container kube-scheduler
Jan 25 16:26:29.839: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-6ksfj, container azuredisk
Jan 25 16:26:29.839: INFO: Collecting events for Pod kube-system/kube-proxy-c76ts
Jan 25 16:26:29.839: INFO: Creating log watcher for controller kube-system/kube-proxy-xmhsm, container kube-proxy
Jan 25 16:26:29.839: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-6ksfj
Jan 25 16:26:29.839: INFO: Collecting events for Pod kube-system/kube-scheduler-capz-e2e-djq8wu-gpu-control-plane-8r5t5
Jan 25 16:26:29.897: INFO: Fetching kube-system pod logs took 721.886151ms
Jan 25 16:26:29.897: INFO: Dumping workload cluster capz-e2e-djq8wu/capz-e2e-djq8wu-gpu Azure activity log
Jan 25 16:26:29.897: INFO: Creating log watcher for controller tigera-operator/tigera-operator-64db64cb98-pwz8v, container tigera-operator
Jan 25 16:26:29.897: INFO: Collecting events for Pod tigera-operator/tigera-operator-64db64cb98-pwz8v
Jan 25 16:26:33.914: INFO: Fetching activity logs took 4.017622489s
Jan 25 16:26:33.914: INFO: Dumping all the Cluster API resources in the "capz-e2e-djq8wu" namespace
Jan 25 16:26:34.246: INFO: Deleting all clusters in the capz-e2e-djq8wu namespace
STEP: Deleting cluster capz-e2e-djq8wu-gpu - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/25/23 16:26:34.263
INFO: Waiting for the Cluster capz-e2e-djq8wu/capz-e2e-djq8wu-gpu to be deleted
STEP: Waiting for cluster capz-e2e-djq8wu-gpu to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/25/23 16:26:34.276
Jan 25 16:32:14.463: INFO: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-djq8wu
Jan 25 16:32:14.481: INFO: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:216 @ 01/25/23 16:32:15.126
INFO: "with a single control plane node and 1 node" started at Wed, 25 Jan 2023 16:33:33 UTC on Ginkgo node 7 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
< Exit [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:117 @ 01/25/23 16:33:33.154 (7m39.056s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sWorkload\scluster\screation\sCreating\sa\scluster\sthat\suses\sthe\sexternal\scloud\sprovider\sand\smachinepools\s\[OPTIONAL\]\swith\s1\scontrol\splane\snode\sand\s1\smachinepool$'
[FAILED] Timed out after 1800.000s.
Timed out waiting for 1 ready replicas for MachinePool capz-e2e-6tolc9/capz-e2e-6tolc9-flex-mp-0
Expected
    <int>: 0
to equal
    <int>: 1
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinepool_helpers.go:91 @ 01/25/23 16:30:30.211
from junit.e2e_suite.1.xml
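The `--ginkgo.focus` argument in the repro command above is a regular expression matched against the full spec name. As a quick sanity check (a sketch, not part of the test harness), the pattern can be verified to select exactly this failing spec:

```python
import re

# Focus pattern from the repro command above (shell quoting removed; the
# backslash escapes are part of the regex itself).
pattern = (
    r"capz\-e2e\s\[It\]\sWorkload\scluster\screation\sCreating\sa\scluster"
    r"\sthat\suses\sthe\sexternal\scloud\sprovider\sand\smachinepools"
    r"\s\[OPTIONAL\]\swith\s1\scontrol\splane\snode\sand\s1\smachinepool$"
)

# Full spec name as reported in the failure output below.
spec = (
    "capz-e2e [It] Workload cluster creation Creating a cluster that uses "
    "the external cloud provider and machinepools [OPTIONAL] "
    "with 1 control plane node and 1 machinepool"
)

# re.search with the trailing `$` anchors the match at the end of the name,
# so only this spec (and no longer-named variant) is selected.
print(bool(re.search(pattern, spec)))  # True
```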
2023/01/25 15:53:05 failed trying to get namespace (capz-e2e-6tolc9):namespaces "capz-e2e-6tolc9" not found
cluster.cluster.x-k8s.io/capz-e2e-6tolc9-flex created
azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-6tolc9-flex created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-6tolc9-flex-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-6tolc9-flex-control-plane created
machinepool.cluster.x-k8s.io/capz-e2e-6tolc9-flex-mp-0 created
azuremachinepool.infrastructure.cluster.x-k8s.io/capz-e2e-6tolc9-flex-mp-0 created
kubeadmconfig.bootstrap.cluster.x-k8s.io/capz-e2e-6tolc9-flex-mp-0 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
felixconfiguration.crd.projectcalico.org/default configured
> Enter [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:56 @ 01/25/23 15:53:05.236
INFO: "" started at Wed, 25 Jan 2023 15:53:05 UTC on Ginkgo node 3 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
STEP: Creating namespace "capz-e2e-6tolc9" for hosting the cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 15:53:05.237
Jan 25 15:53:05.237: INFO: starting to create namespace for hosting the "capz-e2e-6tolc9" test spec
INFO: Creating namespace capz-e2e-6tolc9
INFO: Creating event watcher for namespace "capz-e2e-6tolc9"
Jan 25 15:53:05.358: INFO: Using existing cluster identity secret
< Exit [BeforeEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:56 @ 01/25/23 15:53:05.359 (122ms)
> Enter [It] with 1 control plane node and 1 machinepool - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:573 @ 01/25/23 15:53:05.359
STEP: using user-assigned identity - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:574 @ 01/25/23 15:53:05.359
INFO: Cluster name is capz-e2e-6tolc9-flex
INFO: Creating the workload cluster with name "capz-e2e-6tolc9-flex" using the "external-cloud-provider-vmss-flex" template (Kubernetes v1.26.0, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-e2e-6tolc9-flex --infrastructure (default) --kubernetes-version v1.26.0 --control-plane-machine-count 1 --worker-machine-count 1 --flavor external-cloud-provider-vmss-flex
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:134 @ 01/25/23 15:53:09.379
INFO: Waiting for control plane to be initialized
STEP: Installing cloud-provider-azure components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:45 @ 01/25/23 15:55:29.523
Jan 25 15:58:09.933: INFO: getting history for release cloud-provider-azure-oot
Jan 25 15:58:10.041: INFO: Release cloud-provider-azure-oot does not exist, installing it
Jan 25 15:58:13.016: INFO: creating 1 resource(s)
Jan 25 15:58:13.270: INFO: creating 10 resource(s)
Jan 25 15:58:14.105: INFO: Install complete
STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:51 @ 01/25/23 15:58:14.105
STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:112 @ 01/25/23 15:58:14.105
Jan 25 15:58:14.229: INFO: getting history for release projectcalico
Jan 25 15:58:14.337: INFO: Release projectcalico does not exist, installing it
Jan 25 15:58:15.069: INFO: creating 1 resource(s)
Jan 25 15:58:15.197: INFO: creating 1 resource(s)
Jan 25 15:58:15.319: INFO: creating 1 resource(s)
Jan 25 15:58:15.445: INFO: creating 1 resource(s)
Jan 25 15:58:15.571: INFO: creating 1 resource(s)
Jan 25 15:58:15.691: INFO: creating 1 resource(s)
Jan 25 15:58:15.941: INFO: creating 1 resource(s)
Jan 25 15:58:16.079: INFO: creating 1 resource(s)
Jan 25 15:58:16.203: INFO: creating 1 resource(s)
Jan 25 15:58:16.334: INFO: creating 1 resource(s)
Jan 25 15:58:16.455: INFO: creating 1 resource(s)
Jan 25 15:58:16.569: INFO: creating 1 resource(s)
Jan 25 15:58:16.699: INFO: creating 1 resource(s)
Jan 25 15:58:16.816: INFO: creating 1 resource(s)
Jan 25 15:58:16.934: INFO: creating 1 resource(s)
Jan 25 15:58:17.062: INFO: creating 1 resource(s)
Jan 25 15:58:17.197: INFO: creating 1 resource(s)
Jan 25 15:58:17.321: INFO: creating 1 resource(s)
Jan 25 15:58:17.460: INFO: creating 1 resource(s)
Jan 25 15:58:17.645: INFO: creating 1 resource(s)
Jan 25 15:58:18.161: INFO: creating 1 resource(s)
Jan 25 15:58:18.292: INFO: Clearing discovery cache
Jan 25 15:58:18.292: INFO: beginning wait for 21 resources with timeout of 1m0s
Jan 25 15:58:23.297: INFO: creating 1 resource(s)
Jan 25 15:58:24.373: INFO: creating 6 resource(s)
Jan 25 15:58:25.548: INFO: Install complete
STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:60 @ 01/25/23 15:58:26.364
STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 15:58:26.798
Jan 25 15:58:26.798: INFO: starting to wait for deployment to become available
Jan 25 15:58:37.016: INFO: Deployment tigera-operator/tigera-operator is now available, took 10.218091424s
STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:74 @ 01/25/23 15:58:38.224
STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 15:58:38.76
Jan 25 15:58:38.760: INFO: starting to wait for deployment to become available
Jan 25 15:59:29.720: INFO: Deployment calico-system/calico-kube-controllers is now available, took 50.96027895s
STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 15:59:30.267
Jan 25 15:59:30.267: INFO: starting to wait for deployment to become available
Jan 25 15:59:30.378: INFO: Deployment calico-system/calico-typha is now available, took 110.372917ms
STEP: Waiting for Ready calico-apiserver deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:79 @ 01/25/23 15:59:30.378
STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 15:59:31.244
Jan 25 15:59:31.244: INFO: starting to wait for deployment to become available
Jan 25 15:59:51.567: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 20.322992408s
STEP: Waiting for Ready cloud-controller-manager deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:55 @ 01/25/23 15:59:51.586
STEP: waiting for deployment kube-system/cloud-controller-manager to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 15:59:52.546
Jan 25 15:59:52.546: INFO: starting to wait for deployment to become available
Jan 25 15:59:52.653: INFO: Deployment kube-system/cloud-controller-manager is now available, took 107.085657ms
INFO: Waiting for the first control plane machine managed by capz-e2e-6tolc9/capz-e2e-6tolc9-flex-control-plane to be provisioned
STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:133 @ 01/25/23 15:59:52.676
STEP: Installing azure-disk CSI driver components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:65 @ 01/25/23 15:59:52.681
Jan 25 15:59:52.805: INFO: getting history for release azuredisk-csi-driver-oot
Jan 25 15:59:52.912: INFO: Release azuredisk-csi-driver-oot does not exist, installing it
Jan 25 15:59:58.016: INFO: creating 1 resource(s)
Jan 25 15:59:58.368: INFO: creating 18 resource(s)
Jan 25 15:59:59.217: INFO: Install complete
STEP: Waiting for Ready csi-azuredisk-controller deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:75 @ 01/25/23 15:59:59.235
STEP: waiting for deployment kube-system/csi-azuredisk-controller to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 15:59:59.674
Jan 25 15:59:59.674: INFO: starting to wait for deployment to become available
Jan 25 16:00:30.121: INFO: Deployment kube-system/csi-azuredisk-controller is now available, took 30.447091235s
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-e2e-6tolc9/capz-e2e-6tolc9-flex-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:165 @ 01/25/23 16:00:30.137
STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:196 @ 01/25/23 16:00:30.145
INFO: Waiting for the machine deployments to be provisioned
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinepool_helpers.go:79 @ 01/25/23 16:00:30.21
[FAILED] Timed out after 1800.000s.
Timed out waiting for 1 ready replicas for MachinePool capz-e2e-6tolc9/capz-e2e-6tolc9-flex-mp-0
Expected
    <int>: 0
to equal
    <int>: 1
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinepool_helpers.go:91 @ 01/25/23 16:30:30.211
< Exit [It] with 1 control plane node and 1 machinepool - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:573 @ 01/25/23 16:30:30.211 (37m24.852s)
> Enter [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:117 @ 01/25/23 16:30:30.211
Jan 25 16:30:30.211: INFO: FAILED!
Jan 25 16:30:30.211: INFO: Cleaning up after "Workload cluster creation Creating a cluster that uses the external cloud provider and machinepools [OPTIONAL] with 1 control plane node and 1 machinepool" spec
STEP: Dumping logs from the "capz-e2e-6tolc9-flex" workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/25/23 16:30:30.211
Jan 25 16:30:30.211: INFO: Dumping workload cluster capz-e2e-6tolc9/capz-e2e-6tolc9-flex logs
Jan 25 16:30:30.252: INFO: Collecting logs for Linux node capz-e2e-6tolc9-flex-control-plane-q52ds in cluster capz-e2e-6tolc9-flex in namespace capz-e2e-6tolc9
Jan 25 16:30:53.739: INFO: Collecting boot logs for AzureMachine capz-e2e-6tolc9-flex-control-plane-q52ds
Jan 25 16:30:55.566: INFO: Dumping workload cluster capz-e2e-6tolc9/capz-e2e-6tolc9-flex kube-system pod logs
Jan 25 16:30:56.764: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-54966b4d6d-2fd9k, container calico-apiserver
Jan 25 16:30:56.764: INFO: Collecting events for Pod calico-apiserver/calico-apiserver-54966b4d6d-2fd9k
Jan 25 16:30:56.765: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-54966b4d6d-84dgl, container calico-apiserver
Jan 25 16:30:56.765: INFO: Collecting events for Pod calico-apiserver/calico-apiserver-54966b4d6d-84dgl
Jan 25 16:30:56.901: INFO: Collecting events for Pod calico-system/calico-node-kkfg5
Jan 25 16:30:56.901: INFO: Collecting events for Pod calico-system/calico-typha-6664bb9d65-2kswq
Jan 25 16:30:56.901: INFO: Creating log watcher for controller calico-system/csi-node-driver-6ndzp, container calico-csi
Jan 25 16:30:56.901: INFO: Collecting events for Pod calico-system/calico-kube-controllers-6b7b9c649d-96ns8
Jan 25 16:30:56.901: INFO: Creating log watcher for controller calico-system/calico-node-kkfg5, container calico-node
Jan 25 16:30:56.902: INFO: Creating log watcher for controller calico-system/csi-node-driver-6ndzp, container csi-node-driver-registrar
Jan 25 16:30:56.902: INFO: Creating log watcher for controller calico-system/calico-typha-6664bb9d65-2kswq, container calico-typha
Jan 25 16:30:56.902: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-6b7b9c649d-96ns8, container calico-kube-controllers
Jan 25 16:30:56.902: INFO: Collecting events for Pod calico-system/csi-node-driver-6ndzp
Jan 25 16:30:57.074: INFO: Collecting events for Pod kube-system/cloud-controller-manager-6d589cdbd-gn6kp
Jan 25 16:30:57.074: INFO: Creating log watcher for controller kube-system/coredns-787d4945fb-dwghg, container coredns
Jan 25 16:30:57.074: INFO: Creating log watcher for controller kube-system/cloud-node-manager-hh9nl, container cloud-node-manager
Jan 25 16:30:57.074: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-45465, container node-driver-registrar
Jan 25 16:30:57.074: INFO: Creating log watcher for controller kube-system/kube-proxy-h6m5t, container kube-proxy
Jan 25 16:30:57.074: INFO: Collecting events for Pod kube-system/kube-apiserver-capz-e2e-6tolc9-flex-control-plane-q52ds
Jan 25 16:30:57.074: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-45465, container azuredisk
Jan 25 16:30:57.076: INFO: Collecting events for Pod kube-system/csi-azuredisk-node-45465
Jan 25 16:30:57.076: INFO: Collecting events for Pod kube-system/coredns-787d4945fb-dwghg
Jan 25 16:30:57.076: INFO: Creating log watcher for controller kube-system/etcd-capz-e2e-6tolc9-flex-control-plane-q52ds, container etcd
Jan 25 16:30:57.076: INFO: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-6tolc9-flex-control-plane-q52ds, container kube-controller-manager
Jan 25 16:30:57.076: INFO: Creating log watcher for controller kube-system/coredns-787d4945fb-zf2jz, container coredns
Jan 25 16:30:57.076: INFO: Collecting events for Pod kube-system/kube-proxy-h6m5t
Jan 25 16:30:57.076: INFO: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-6tolc9-flex-control-plane-q52ds, container kube-scheduler
Jan 25 16:30:57.076: INFO: Collecting events for Pod kube-system/kube-scheduler-capz-e2e-6tolc9-flex-control-plane-q52ds
Jan 25 16:30:57.076: INFO: Collecting events for Pod kube-system/cloud-node-manager-hh9nl
Jan 25 16:30:57.076: INFO: Collecting events for Pod kube-system/etcd-capz-e2e-6tolc9-flex-control-plane-q52ds
Jan 25 16:30:57.076: INFO: Collecting events for Pod kube-system/kube-controller-manager-capz-e2e-6tolc9-flex-control-plane-q52ds
Jan 25 16:30:57.076: INFO: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-6tolc9-flex-control-plane-q52ds, container kube-apiserver
Jan 25 16:30:57.077: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-7c87ff77db-knqrr, container csi-resizer
Jan 25 16:30:57.077: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-7c87ff77db-knqrr, container csi-provisioner
Jan 25 16:30:57.078: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-7c87ff77db-knqrr, container liveness-probe
Jan 25 16:30:57.078: INFO: Collecting events for Pod kube-system/csi-azuredisk-controller-7c87ff77db-knqrr
Jan 25 16:30:57.078: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-7c87ff77db-knqrr, container azuredisk
Jan 25 16:30:57.078: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-45465, container liveness-probe
Jan 25 16:30:57.078: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-7c87ff77db-knqrr, container csi-attacher
Jan 25 16:30:57.078: INFO: Collecting events for Pod kube-system/coredns-787d4945fb-zf2jz
Jan 25 16:30:57.079: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-7c87ff77db-knqrr, container csi-snapshotter
Jan 25 16:30:57.079: INFO: Creating log watcher for controller kube-system/cloud-controller-manager-6d589cdbd-gn6kp, container cloud-controller-manager
Jan 25 16:30:57.229: INFO: Fetching kube-system pod logs took 1.662857018s
Jan 25 16:30:57.229: INFO: Dumping workload cluster capz-e2e-6tolc9/capz-e2e-6tolc9-flex Azure activity log
Jan 25 16:30:57.229: INFO: Creating log watcher for controller tigera-operator/tigera-operator-54b47459dd-8fg5w, container tigera-operator
Jan 25 16:30:57.229: INFO: Collecting events for Pod tigera-operator/tigera-operator-54b47459dd-8fg5w
Jan 25 16:31:02.809: INFO: Fetching activity logs took 5.579620678s
Jan 25 16:31:02.809: INFO: Dumping all the Cluster API resources in the "capz-e2e-6tolc9" namespace
Jan 25 16:31:03.175: INFO: Deleting all clusters in the capz-e2e-6tolc9 namespace
STEP: Deleting cluster capz-e2e-6tolc9-flex - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/25/23 16:31:03.197
INFO: Waiting for the Cluster capz-e2e-6tolc9/capz-e2e-6tolc9-flex to be deleted
STEP: Waiting for cluster capz-e2e-6tolc9-flex to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/25/23 16:31:03.211
Jan 25 16:36:33.408: INFO: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-6tolc9
Jan 25 16:36:33.427: INFO: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:216 @ 01/25/23 16:36:34.09
INFO: "with 1 control plane node and 1 machinepool" started at Wed, 25 Jan 2023 16:38:03 UTC on Ginkgo node 3 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
< Exit [AfterEach] Workload cluster creation - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:117 @ 01/25/23 16:38:03.292 (7m33.082s)
capz-e2e [It] Workload cluster creation Creating a Flatcar cluster [OPTIONAL] With Flatcar control-plane and worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e [It] Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e [It] Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
capz-e2e [It] Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [It] Conformance Tests conformance-tests
capz-e2e [It] Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e [It] Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e [It] Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e [It] Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e [It] Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e [It] Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e [It] Workload cluster creation Creating an AKS cluster [Managed Kubernetes] with a single control plane node and 1 node
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=external CCM=external AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with out-of-tree cloud provider
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=external CCM=internal AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=internal CCM=internal AzureDiskCSIMigration=false: upgrade to v1.23 should create volumes dynamically with intree cloud provider
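Any spec listed above can be re-run in isolation by passing its name to `--ginkgo.focus`, as the failure header does for the GPU spec. The focus value is a Go regular expression, so brackets and hyphens in the spec name must be escaped and spaces are conventionally written as `\s`. A minimal sketch of that mapping, assuming GNU sed (the spec name below is the GPU spec from the header; substitute any name from the list):

```shell
# Build an escaped --ginkgo.focus pattern from a Ginkgo spec name.
spec='capz-e2e [It] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node'

# Backslash-escape regex metacharacters present in spec names ([ ] - . ( ) $),
# then replace each space with \s so the pattern survives shell word splitting.
focus=$(printf '%s' "$spec" | sed -e 's/[][\.()$-]/\\&/g' -e 's/ /\\s/g')

# Anchor with a trailing $ so the pattern matches only this exact spec.
echo "--ginkgo.focus=${focus}\$"
```

The printed flag reproduces the pattern shown in the failure header and can be dropped into `go run hack/e2e.go -v --test --test_args='…'` from a checkout of the repository.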