Result: FAILURE
Tests: 1 failed / 8 succeeded
Started: 2022-04-12 19:31
Elapsed: 2h7m
Revision: release-0.5

Test Failures


capz-e2e Workload cluster creation Creating a Windows enabled VMSS cluster with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node 49m46s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sWindows\senabled\sVMSS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\san\sLinux\sAzureMachinePool\swith\s1\snodes\sand\sWindows\sAzureMachinePool\swith\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543
Timed out after 900.001s.
Expected
    <int>: 0
to equal
    <int>: 1
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/machinepool_helpers.go:85
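The "Expected <int>: 0 to equal <int>: 1" message is a Gomega polling assertion in machinepool_helpers.go giving up after the 900s timeout while the Windows AzureMachinePool still reports 0 ready nodes instead of the expected 1. A minimal illustrative sketch of that kind of assertion follows; it is not the framework's actual code, and getReadyReplicas is a hypothetical placeholder:

// Illustrative sketch only: a Gomega Eventually of the kind that produces the
// timeout above when a machine pool never reaches its desired replica count.
package e2e_sketch

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
)

func waitForMachinePoolNodes(ctx context.Context, getReadyReplicas func(context.Context) int, want int) {
	// Poll for up to 15 minutes (the 900s seen in the failure); if the count is
	// still 0 when time runs out, Gomega reports "Expected <int>: 0 to equal <int>: 1".
	Eventually(func() int {
		return getReadyReplicas(ctx)
	}, 15*time.Minute, 10*time.Second).Should(Equal(want))
}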
				
Full stdout/stderr: junit.e2e_suite.2.xml



8 Passed Tests

13 Skipped Tests

Error lines from build-log.txt

... skipping 433 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:269

INFO: "With ipv6 worker node" started at Tue, 12 Apr 2022 19:39:13 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-m5hnk5" for hosting the cluster
Apr 12 19:39:13.060: INFO: starting to create namespace for hosting the "capz-e2e-m5hnk5" test spec
2022/04/12 19:39:13 failed trying to get namespace (capz-e2e-m5hnk5):namespaces "capz-e2e-m5hnk5" not found
INFO: Creating namespace capz-e2e-m5hnk5
INFO: Creating event watcher for namespace "capz-e2e-m5hnk5"
Apr 12 19:39:13.108: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-m5hnk5-ipv6
INFO: Creating the workload cluster with name "capz-e2e-m5hnk5-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 564.002273ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-m5hnk5" namespace
STEP: Deleting all clusters in the capz-e2e-m5hnk5 namespace
STEP: Deleting cluster capz-e2e-m5hnk5-ipv6
INFO: Waiting for the Cluster capz-e2e-m5hnk5/capz-e2e-m5hnk5-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-m5hnk5-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-z6hzh, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-m5hnk5-ipv6-control-plane-57h8r, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-m5hnk5-ipv6-control-plane-jc9lh, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-pxgxc, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-m5hnk5-ipv6-control-plane-jc9lh, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-vsjbg, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-m5hnk5-ipv6-control-plane-dzvpv, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-vnt8l, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-m5hnk5-ipv6-control-plane-57h8r, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-7slrj, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-m5hnk5-ipv6-control-plane-jc9lh, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-m5hnk5-ipv6-control-plane-jc9lh, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-m5hnk5-ipv6-control-plane-57h8r, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-m5hnk5-ipv6-control-plane-dzvpv, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7k6jw, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-m5hnk5-ipv6-control-plane-dzvpv, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-725wv, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-dpwgj, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-m5hnk5-ipv6-control-plane-57h8r, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-7n4kj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-xncbn, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fcqcb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-m5hnk5-ipv6-control-plane-dzvpv, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-m5hnk5
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 17m39s on Ginkgo node 1 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:315

INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" started at Tue, 12 Apr 2022 19:56:52 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-8qaxkx" for hosting the cluster
Apr 12 19:56:52.158: INFO: starting to create namespace for hosting the "capz-e2e-8qaxkx" test spec
2022/04/12 19:56:52 failed trying to get namespace (capz-e2e-8qaxkx):namespaces "capz-e2e-8qaxkx" not found
INFO: Creating namespace capz-e2e-8qaxkx
INFO: Creating event watcher for namespace "capz-e2e-8qaxkx"
Apr 12 19:56:52.201: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-8qaxkx-vmss
INFO: Creating the workload cluster with name "capz-e2e-8qaxkx-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 106 lines ...
STEP: Fetching activity logs took 601.475144ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-8qaxkx" namespace
STEP: Deleting all clusters in the capz-e2e-8qaxkx namespace
STEP: Deleting cluster capz-e2e-8qaxkx-vmss
INFO: Waiting for the Cluster capz-e2e-8qaxkx/capz-e2e-8qaxkx-vmss to be deleted
STEP: Waiting for cluster capz-e2e-8qaxkx-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-8qaxkx-vmss-control-plane-x6dmj, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5k85c, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-tndnk, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-j2cdf, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-8qaxkx-vmss-control-plane-x6dmj, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-8qaxkx-vmss-control-plane-x6dmj, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-8qaxkx-vmss-control-plane-x6dmj, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-t6ktv, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-m6bg4, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-8qaxkx
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 nodes" ran for 18m43s on Ginkgo node 1 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:203

INFO: "With 3 control-plane nodes and 2 worker nodes" started at Tue, 12 Apr 2022 19:39:13 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-kuu02b" for hosting the cluster
Apr 12 19:39:13.059: INFO: starting to create namespace for hosting the "capz-e2e-kuu02b" test spec
2022/04/12 19:39:13 failed trying to get namespace (capz-e2e-kuu02b):namespaces "capz-e2e-kuu02b" not found
INFO: Creating namespace capz-e2e-kuu02b
INFO: Creating event watcher for namespace "capz-e2e-kuu02b"
Apr 12 19:39:13.122: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-kuu02b-ha
INFO: Creating the workload cluster with name "capz-e2e-kuu02b-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 67 lines ...
Apr 12 19:49:44.528: INFO: starting to delete external LB service weby6kggt-elb
Apr 12 19:49:44.625: INFO: starting to delete deployment weby6kggt
Apr 12 19:49:44.705: INFO: starting to delete job curl-to-elb-jobj4noed6azpq
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Apr 12 19:49:44.836: INFO: starting to create dev deployment namespace
2022/04/12 19:49:44 failed trying to get namespace (development):namespaces "development" not found
2022/04/12 19:49:44 namespace development does not exist, creating...
STEP: Creating production namespace
Apr 12 19:49:44.996: INFO: starting to create prod deployment namespace
2022/04/12 19:49:45 failed trying to get namespace (production):namespaces "production" not found
2022/04/12 19:49:45 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Apr 12 19:49:45.128: INFO: starting to create frontend-prod deployments
Apr 12 19:49:45.191: INFO: starting to create frontend-dev deployments
Apr 12 19:49:45.271: INFO: starting to create backend deployments
Apr 12 19:49:45.342: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Apr 12 19:50:09.368: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.90.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
Apr 12 19:52:20.367: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Apr 12 19:52:20.613: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.90.131 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.90.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
Apr 12 19:56:42.517: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Apr 12 19:56:42.754: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.90.132 port 80: Connection timed out

STEP: Cleaning up after ourselves
Apr 12 19:58:53.584: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Apr 12 19:58:53.814: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.90.129 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.90.132 port 80: Connection timed out

STEP: Cleaning up after ourselves
Apr 12 20:03:15.723: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Apr 12 20:03:15.974: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.90.131 port 80: Connection timed out

STEP: Cleaning up after ourselves
Apr 12 20:05:26.796: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Apr 12 20:05:27.045: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.90.131 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: Dumping logs from the "capz-e2e-kuu02b-ha" workload cluster
STEP: Dumping workload cluster capz-e2e-kuu02b/capz-e2e-kuu02b-ha logs
Apr 12 20:07:38.450: INFO: INFO: Collecting logs for node capz-e2e-kuu02b-ha-control-plane-tsvck in cluster capz-e2e-kuu02b-ha in namespace capz-e2e-kuu02b

Apr 12 20:07:50.690: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-kuu02b-ha-control-plane-tsvck
... skipping 39 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-mmvtj, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-gplg8, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-kuu02b-ha-control-plane-tsvck, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-kuu02b-ha-control-plane-469jc, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-5rn5n, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-qnmj7, container calico-node
STEP: Got error while iterating over activity logs for resource group capz-e2e-kuu02b-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000642096s
STEP: Dumping all the Cluster API resources in the "capz-e2e-kuu02b" namespace
STEP: Deleting all clusters in the capz-e2e-kuu02b namespace
STEP: Deleting cluster capz-e2e-kuu02b-ha
INFO: Waiting for the Cluster capz-e2e-kuu02b/capz-e2e-kuu02b-ha to be deleted
STEP: Waiting for cluster capz-e2e-kuu02b-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vdtwg, container kube-proxy: http2: server sent GOAWAY and closed the connection; LastStreamID=113, ErrCode=NO_ERROR, debug=""
STEP: Got error while streaming logs for pod kube-system/calico-node-qnmj7, container calico-node: http2: server sent GOAWAY and closed the connection; LastStreamID=113, ErrCode=NO_ERROR, debug=""
STEP: Got error while streaming logs for pod kube-system/kube-proxy-5rn5n, container kube-proxy: http2: server sent GOAWAY and closed the connection; LastStreamID=113, ErrCode=NO_ERROR, debug=""
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-kuu02b-ha-control-plane-7j82f, container kube-apiserver: http2: server sent GOAWAY and closed the connection; LastStreamID=113, ErrCode=NO_ERROR, debug=""
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-gplg8, container coredns: http2: server sent GOAWAY and closed the connection; LastStreamID=113, ErrCode=NO_ERROR, debug=""
STEP: Got error while streaming logs for pod kube-system/calico-node-bxmjp, container calico-node: http2: server sent GOAWAY and closed the connection; LastStreamID=113, ErrCode=NO_ERROR, debug=""
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-hdxr8, container coredns: http2: server sent GOAWAY and closed the connection; LastStreamID=113, ErrCode=NO_ERROR, debug=""
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-kuu02b
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 worker nodes" ran for 46m4s on Ginkgo node 2 of 3

... skipping 8 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141

INFO: "Creates a public management cluster in the same vnet" started at Tue, 12 Apr 2022 19:39:13 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-j95k49" for hosting the cluster
Apr 12 19:39:13.059: INFO: starting to create namespace for hosting the "capz-e2e-j95k49" test spec
2022/04/12 19:39:13 failed trying to get namespace (capz-e2e-j95k49):namespaces "capz-e2e-j95k49" not found
INFO: Creating namespace capz-e2e-j95k49
INFO: Creating event watcher for namespace "capz-e2e-j95k49"
Apr 12 19:39:13.142: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-j95k49-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-zs2x7, container coredns
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-j95k49-public-custom-vnet-control-plane-nsq88, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-xb88s, container calico-node
STEP: Creating log watcher for controller kube-system/kube-proxy-bzpt9, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-rsw89, container kube-proxy
STEP: Dumping workload cluster capz-e2e-j95k49/capz-e2e-j95k49-public-custom-vnet Azure activity log
STEP: Got error while iterating over activity logs for resource group capz-e2e-j95k49-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000798719s
STEP: Dumping all the Cluster API resources in the "capz-e2e-j95k49" namespace
STEP: Deleting all clusters in the capz-e2e-j95k49 namespace
STEP: Deleting cluster capz-e2e-j95k49-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-j95k49/capz-e2e-j95k49-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-j95k49-public-custom-vnet to be deleted
W0412 20:26:42.165520   24234 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I0412 20:27:13.747789   24234 trace.go:205] Trace[758784297]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (12-Apr-2022 20:26:43.746) (total time: 30001ms):
Trace[758784297]: [30.001252375s] [30.001252375s] END
E0412 20:27:13.747912   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp 20.69.119.31:6443: i/o timeout
I0412 20:27:45.793513   24234 trace.go:205] Trace[184497856]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (12-Apr-2022 20:27:15.792) (total time: 30001ms):
Trace[184497856]: [30.001261134s] [30.001261134s] END
E0412 20:27:45.793579   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp 20.69.119.31:6443: i/o timeout
I0412 20:28:20.567605   24234 trace.go:205] Trace[911592206]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (12-Apr-2022 20:27:50.565) (total time: 30002ms):
Trace[911592206]: [30.002295413s] [30.002295413s] END
E0412 20:28:20.567683   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp 20.69.119.31:6443: i/o timeout
I0412 20:28:59.301282   24234 trace.go:205] Trace[2105401631]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (12-Apr-2022 20:28:29.299) (total time: 30001ms):
Trace[2105401631]: [30.001317973s] [30.001317973s] END
E0412 20:28:59.301348   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp 20.69.119.31:6443: i/o timeout
I0412 20:29:45.663076   24234 trace.go:205] Trace[462677896]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (12-Apr-2022 20:29:15.660) (total time: 30002ms):
Trace[462677896]: [30.002843449s] [30.002843449s] END
E0412 20:29:45.663144   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp 20.69.119.31:6443: i/o timeout
I0412 20:30:45.395724   24234 trace.go:205] Trace[180741923]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (12-Apr-2022 20:30:15.394) (total time: 30001ms):
Trace[180741923]: [30.001233663s] [30.001233663s] END
E0412 20:30:45.395911   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp 20.69.119.31:6443: i/o timeout
I0412 20:31:54.408443   24234 trace.go:205] Trace[1560766274]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (12-Apr-2022 20:31:24.406) (total time: 30001ms):
Trace[1560766274]: [30.001478044s] [30.001478044s] END
E0412 20:31:54.408523   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp 20.69.119.31:6443: i/o timeout
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-j95k49
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Apr 12 20:32:12.984: INFO: deleting an existing virtual network "custom-vnet"
Apr 12 20:32:23.533: INFO: deleting an existing route table "node-routetable"
Apr 12 20:32:25.893: INFO: deleting an existing network security group "node-nsg"
E0412 20:32:29.871710   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Apr 12 20:32:36.284: INFO: deleting an existing network security group "control-plane-nsg"
Apr 12 20:32:46.658: INFO: verifying the existing resource group "capz-e2e-j95k49-public-custom-vnet" is empty
Apr 12 20:32:46.706: INFO: deleting the existing resource group "capz-e2e-j95k49-public-custom-vnet"
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E0412 20:33:21.598054   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:34:12.460738   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 55m54s on Ginkgo node 3 of 3


• [SLOW TEST:3353.962 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:377

INFO: "with a single control plane node and 1 node" started at Tue, 12 Apr 2022 20:15:34 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-etnhfp" for hosting the cluster
Apr 12 20:15:34.974: INFO: starting to create namespace for hosting the "capz-e2e-etnhfp" test spec
2022/04/12 20:15:34 failed trying to get namespace (capz-e2e-etnhfp):namespaces "capz-e2e-etnhfp" not found
INFO: Creating namespace capz-e2e-etnhfp
INFO: Creating event watcher for namespace "capz-e2e-etnhfp"
Apr 12 20:15:35.013: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-etnhfp-gpu
INFO: Creating the workload cluster with name "capz-e2e-etnhfp-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 58 lines ...
STEP: Fetching activity logs took 515.708403ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-etnhfp" namespace
STEP: Deleting all clusters in the capz-e2e-etnhfp namespace
STEP: Deleting cluster capz-e2e-etnhfp-gpu
INFO: Waiting for the Cluster capz-e2e-etnhfp/capz-e2e-etnhfp-gpu to be deleted
STEP: Waiting for cluster capz-e2e-etnhfp-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-etnhfp-gpu-control-plane-wx6l8, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-52fph, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-lb9lf, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-gjt5h, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-etnhfp-gpu-control-plane-wx6l8, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-bcmpm, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-etnhfp-gpu-control-plane-wx6l8, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-etnhfp-gpu-control-plane-wx6l8, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-fxwlt, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-etnhfp
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 21m23s on Ginkgo node 1 of 3

... skipping 10 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:419

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Tue, 12 Apr 2022 20:25:16 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-lkcps8" for hosting the cluster
Apr 12 20:25:16.920: INFO: starting to create namespace for hosting the "capz-e2e-lkcps8" test spec
2022/04/12 20:25:16 failed trying to get namespace (capz-e2e-lkcps8):namespaces "capz-e2e-lkcps8" not found
INFO: Creating namespace capz-e2e-lkcps8
INFO: Creating event watcher for namespace "capz-e2e-lkcps8"
Apr 12 20:25:16.965: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-lkcps8-oot
INFO: Creating the workload cluster with name "capz-e2e-lkcps8-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 120 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:454

INFO: "with a single control plane node and 1 node" started at Tue, 12 Apr 2022 20:35:07 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-s6nmj2" for hosting the cluster
Apr 12 20:35:07.025: INFO: starting to create namespace for hosting the "capz-e2e-s6nmj2" test spec
2022/04/12 20:35:07 failed trying to get namespace (capz-e2e-s6nmj2):namespaces "capz-e2e-s6nmj2" not found
INFO: Creating namespace capz-e2e-s6nmj2
INFO: Creating event watcher for namespace "capz-e2e-s6nmj2"
Apr 12 20:35:07.078: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-s6nmj2-aks
E0412 20:35:07.417888   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Creating the workload cluster with name "capz-e2e-s6nmj2-aks" using the "aks-multi-tenancy" template (Kubernetes v1.22.6, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-e2e-s6nmj2-aks --infrastructure (default) --kubernetes-version v1.22.6 --control-plane-machine-count 1 --worker-machine-count 1 --flavor aks-multi-tenancy
INFO: Applying the cluster template yaml to the cluster
cluster.cluster.x-k8s.io/capz-e2e-s6nmj2-aks created
azuremanagedcontrolplane.infrastructure.cluster.x-k8s.io/capz-e2e-s6nmj2-aks created
... skipping 3 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E0412 20:35:59.646227   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:36:49.077771   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:37:34.892552   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:38:10.294466   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:38:41.215671   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:39:12.970447   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Apr 12 20:39:39.481: INFO: Waiting for the first control plane machine managed by capz-e2e-s6nmj2/capz-e2e-s6nmj2-aks to be provisioned
STEP: Waiting for atleast one control plane node to exist
E0412 20:39:46.168354   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:40:41.610254   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:41:27.238950   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:42:24.710127   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:43:04.427298   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:43:35.415432   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be ready
Apr 12 20:43:39.811: INFO: Waiting for the first control plane machine managed by capz-e2e-s6nmj2/capz-e2e-s6nmj2-aks to be provisioned
STEP: Waiting for all control plane nodes to exist
INFO: Waiting for the machine deployments to be provisioned
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes to exist
... skipping 10 lines ...
STEP: time sync OK for host aks-agentpool1-14821083-vmss000000
STEP: time sync OK for host aks-agentpool1-14821083-vmss000000
STEP: Dumping logs from the "capz-e2e-s6nmj2-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-s6nmj2/capz-e2e-s6nmj2-aks logs
Apr 12 20:43:47.097: INFO: INFO: Collecting logs for node aks-agentpool1-14821083-vmss000000 in cluster capz-e2e-s6nmj2-aks in namespace capz-e2e-s6nmj2

E0412 20:44:22.454424   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:45:05.197812   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:45:35.753555   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Apr 12 20:45:56.460: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool0, cluster capz-e2e-s6nmj2/capz-e2e-s6nmj2-aks: [dialing public load balancer at capz-e2e-s6nmj2-aks-e9fe8e51.hcp.westus2.azmk8s.io: dial tcp 52.156.149.48:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
Apr 12 20:45:57.158: INFO: INFO: Collecting logs for node aks-agentpool1-14821083-vmss000000 in cluster capz-e2e-s6nmj2-aks in namespace capz-e2e-s6nmj2

E0412 20:46:10.429473   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:46:51.571082   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:47:38.380062   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
Apr 12 20:48:07.532: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool1, cluster capz-e2e-s6nmj2/capz-e2e-s6nmj2-aks: [dialing public load balancer at capz-e2e-s6nmj2-aks-e9fe8e51.hcp.westus2.azmk8s.io: dial tcp 52.156.149.48:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-s6nmj2/capz-e2e-s6nmj2-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 636.629571ms
STEP: Dumping workload cluster capz-e2e-s6nmj2/capz-e2e-s6nmj2-aks Azure activity log
STEP: Creating log watcher for controller kube-system/csi-azuredisk-node-xkhmv, container liveness-probe
STEP: Creating log watcher for controller kube-system/csi-azuredisk-node-vzhdb, container liveness-probe
STEP: Creating log watcher for controller kube-system/csi-azuredisk-node-xkhmv, container node-driver-registrar
... skipping 20 lines ...
STEP: Fetching activity logs took 576.135741ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-s6nmj2" namespace
STEP: Deleting all clusters in the capz-e2e-s6nmj2 namespace
STEP: Deleting cluster capz-e2e-s6nmj2-aks
INFO: Waiting for the Cluster capz-e2e-s6nmj2/capz-e2e-s6nmj2-aks to be deleted
STEP: Waiting for cluster capz-e2e-s6nmj2-aks to be deleted
E0412 20:48:30.664426   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:49:26.751279   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:50:04.130063   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:50:45.473746   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:51:43.478677   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:52:19.798139   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:52:59.854145   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:53:58.665772   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:54:54.247932   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:55:32.432819   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:56:11.149761   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-s6nmj2
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E0412 20:56:43.856409   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:57:40.081694   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
E0412 20:58:21.650297   24234 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-j95k49/events?resourceVersion=8912": dial tcp: lookup capz-e2e-j95k49-public-custom-vnet-2318b8f.westus2.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 23m16s on Ginkgo node 3 of 3


• [SLOW TEST:1395.643 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:496

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Tue, 12 Apr 2022 20:36:57 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-giqinb" for hosting the cluster
Apr 12 20:36:57.915: INFO: starting to create namespace for hosting the "capz-e2e-giqinb" test spec
2022/04/12 20:36:57 failed trying to get namespace (capz-e2e-giqinb):namespaces "capz-e2e-giqinb" not found
INFO: Creating namespace capz-e2e-giqinb
INFO: Creating event watcher for namespace "capz-e2e-giqinb"
Apr 12 20:36:57.951: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-giqinb-win-ha
INFO: Creating the workload cluster with name "capz-e2e-giqinb-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 151 lines ...
STEP: Fetching activity logs took 928.164974ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-giqinb" namespace
STEP: Deleting all clusters in the capz-e2e-giqinb namespace
STEP: Deleting cluster capz-e2e-giqinb-win-ha
INFO: Waiting for the Cluster capz-e2e-giqinb/capz-e2e-giqinb-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-giqinb-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-giqinb-win-ha-control-plane-pnkc6, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jqgh8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-giqinb-win-ha-control-plane-l5kdv, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-giqinb-win-ha-control-plane-pnkc6, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-m4989, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-giqinb-win-ha-control-plane-pnkc6, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-giqinb-win-ha-control-plane-l5kdv, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-giqinb-win-ha-control-plane-l5kdv, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-giqinb-win-ha-control-plane-pnkc6, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-x6kh8, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-59mn2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-giqinb-win-ha-control-plane-l5kdv, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-4cx22, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-rmv9b, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-mbbkp, container kube-flannel: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-giqinb
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 24m0s on Ginkgo node 1 of 3

... skipping 10 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:543

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Tue, 12 Apr 2022 20:47:38 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-p1kjcf" for hosting the cluster
Apr 12 20:47:38.554: INFO: starting to create namespace for hosting the "capz-e2e-p1kjcf" test spec
2022/04/12 20:47:38 failed trying to get namespace (capz-e2e-p1kjcf):namespaces "capz-e2e-p1kjcf" not found
INFO: Creating namespace capz-e2e-p1kjcf
INFO: Creating event watcher for namespace "capz-e2e-p1kjcf"
Apr 12 20:47:38.606: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-p1kjcf-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-p1kjcf-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 48 lines ...
STEP: Fetching activity logs took 1.081303607s
STEP: Dumping all the Cluster API resources in the "capz-e2e-p1kjcf" namespace
STEP: Deleting all clusters in the capz-e2e-p1kjcf namespace
STEP: Deleting cluster capz-e2e-p1kjcf-win-vmss
INFO: Waiting for the Cluster capz-e2e-p1kjcf/capz-e2e-p1kjcf-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-p1kjcf-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xlx2v, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-zwlxh, container kube-flannel: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-p1kjcf
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 49m46s on Ginkgo node 2 of 3

... skipping 55 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a Windows enabled VMSS cluster [It] with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/machinepool_helpers.go:85

Ran 9 of 22 Specs in 7209.647 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 13 Skipped


Ginkgo ran 1 suite in 2h1m38.930926406s
Test Suite Failed
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...