Result: FAILURE
Tests: 0 failed / 6 succeeded
Started: 2021-11-13 18:33
Elapsed: 2h15m
Revision: main

No Test Failures!


6 Passed Tests

14 Skipped Tests

Error lines from build-log.txt

... skipping 425 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:288

INFO: "With ipv6 worker node" started at Sat, 13 Nov 2021 18:40:21 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-hztn9q" for hosting the cluster
Nov 13 18:40:21.788: INFO: starting to create namespace for hosting the "capz-e2e-hztn9q" test spec
2021/11/13 18:40:21 failed trying to get namespace (capz-e2e-hztn9q):namespaces "capz-e2e-hztn9q" not found
INFO: Creating namespace capz-e2e-hztn9q
INFO: Creating event watcher for namespace "capz-e2e-hztn9q"
Nov 13 18:40:21.868: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-hztn9q-ipv6
INFO: Creating the workload cluster with name "capz-e2e-hztn9q-ipv6" using the "ipv6" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 93 lines ...
STEP: Fetching activity logs took 574.082759ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-hztn9q" namespace
STEP: Deleting all clusters in the capz-e2e-hztn9q namespace
STEP: Deleting cluster capz-e2e-hztn9q-ipv6
INFO: Waiting for the Cluster capz-e2e-hztn9q/capz-e2e-hztn9q-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-hztn9q-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-pntb2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-6h5j9, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-hztn9q-ipv6-control-plane-jds2n, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-hztn9q-ipv6-control-plane-jds2n, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-t7zfp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-q8z4r, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-hztn9q-ipv6-control-plane-vckbf, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-pprbd, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-hztn9q-ipv6-control-plane-jds2n, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-hztn9q-ipv6-control-plane-vckbf, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-hztn9q-ipv6-control-plane-jds2n, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-hztn9q-ipv6-control-plane-vckbf, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-q8g8b, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-hztn9q-ipv6-control-plane-vckbf, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-n8hp8, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-hztn9q
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 18m42s on Ginkgo node 3 of 3

... skipping 10 lines ...
  With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:205

INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" started at Sat, 13 Nov 2021 18:40:21 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-mdvg8d" for hosting the cluster
Nov 13 18:40:21.787: INFO: starting to create namespace for hosting the "capz-e2e-mdvg8d" test spec
2021/11/13 18:40:21 failed trying to get namespace (capz-e2e-mdvg8d):namespaces "capz-e2e-mdvg8d" not found
INFO: Creating namespace capz-e2e-mdvg8d
INFO: Creating event watcher for namespace "capz-e2e-mdvg8d"
Nov 13 18:40:21.898: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-mdvg8d-ha
INFO: Creating the workload cluster with name "capz-e2e-mdvg8d-ha" using the "(default)" template (Kubernetes v1.22.1, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 67 lines ...
STEP: waiting for job default/curl-to-elb-job0e2j7gm1lhf to be complete
Nov 13 18:49:49.881: INFO: waiting for job default/curl-to-elb-job0e2j7gm1lhf to be complete
Nov 13 18:50:00.092: INFO: job default/curl-to-elb-job0e2j7gm1lhf is complete, took 10.211403461s
STEP: connecting directly to the external LB service
Nov 13 18:50:00.092: INFO: starting attempts to connect directly to the external LB service
2021/11/13 18:50:00 [DEBUG] GET http://20.82.199.135
2021/11/13 18:50:30 [ERR] GET http://20.82.199.135 request failed: Get "http://20.82.199.135": dial tcp 20.82.199.135:80: i/o timeout
2021/11/13 18:50:30 [DEBUG] GET http://20.82.199.135: retrying in 1s (4 left)
Nov 13 18:50:31.298: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 13 18:50:31.298: INFO: starting to delete external LB service webac27qx-elb
Nov 13 18:50:31.453: INFO: starting to delete deployment webac27qx
Nov 13 18:50:31.561: INFO: starting to delete job curl-to-elb-job0e2j7gm1lhf
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Nov 13 18:50:31.722: INFO: starting to create dev deployment namespace
2021/11/13 18:50:31 failed trying to get namespace (development):namespaces "development" not found
2021/11/13 18:50:31 namespace development does not exist, creating...
STEP: Creating production namespace
Nov 13 18:50:31.937: INFO: starting to create prod deployment namespace
2021/11/13 18:50:32 failed trying to get namespace (production):namespaces "production" not found
2021/11/13 18:50:32 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Nov 13 18:50:32.148: INFO: starting to create frontend-prod deployments
Nov 13 18:50:32.256: INFO: starting to create frontend-dev deployments
Nov 13 18:50:32.365: INFO: starting to create backend deployments
Nov 13 18:50:32.473: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Nov 13 18:50:58.998: INFO: starting to apply a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.252.2 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 13 18:53:09.330: INFO: starting to clean up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Nov 13 18:53:09.708: INFO: starting to apply a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.252.2 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.252.2 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 13 18:57:31.240: INFO: starting to clean up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Nov 13 18:57:31.615: INFO: starting to apply a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.252.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 13 18:59:44.600: INFO: starting to clean up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Nov 13 18:59:44.972: INFO: starting to apply a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.252.1 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.252.3 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 13 19:04:08.843: INFO: starting to clean up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Nov 13 19:04:09.214: INFO: starting to apply a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.252.2 port 80: Connection timed out

STEP: Cleaning up after ourselves
Nov 13 19:06:21.907: INFO: starting to clean up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Nov 13 19:06:22.280: INFO: starting to apply a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.252.2 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windowsovocax to be available
Nov 13 19:08:34.480: INFO: starting to wait for deployment to become available
Nov 13 19:09:25.151: INFO: Deployment default/web-windowsovocax is now available, took 50.670482081s
... skipping 51 lines ...
Nov 13 19:11:33.797: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-mdvg8d-ha-md-0-fvsln

Nov 13 19:11:34.217: INFO: INFO: Collecting logs for node 10.1.0.5 in cluster capz-e2e-mdvg8d-ha in namespace capz-e2e-mdvg8d

Nov 13 19:12:04.374: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-mdvg8d-ha-md-win-5vn54

Failed to get logs for machine capz-e2e-mdvg8d-ha-md-win-5db6fbb448-2dvrd, cluster capz-e2e-mdvg8d/capz-e2e-mdvg8d-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
Nov 13 19:12:04.772: INFO: INFO: Collecting logs for node 10.1.0.7 in cluster capz-e2e-mdvg8d-ha in namespace capz-e2e-mdvg8d

Nov 13 19:12:39.797: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-mdvg8d-ha-md-win-rvm9q

Failed to get logs for machine capz-e2e-mdvg8d-ha-md-win-5db6fbb448-xmx8b, cluster capz-e2e-mdvg8d/capz-e2e-mdvg8d-ha: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-mdvg8d/capz-e2e-mdvg8d-ha kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-node-ngx8r, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-phdhf, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-jmw5w, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-mdvg8d-ha-control-plane-k7lq8, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-kjc9m, container kube-proxy
... skipping 22 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-khqqs, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-ks6fc, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-windows-bc644, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-mdvg8d-ha-control-plane-k7lq8, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-windows-jkp54, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-windows-jkp54, container calico-node-felix
STEP: Got error while iterating over activity logs for resource group capz-e2e-mdvg8d-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000228612s
STEP: Dumping all the Cluster API resources in the "capz-e2e-mdvg8d" namespace
STEP: Deleting all clusters in the capz-e2e-mdvg8d namespace
STEP: Deleting cluster capz-e2e-mdvg8d-ha
INFO: Waiting for the Cluster capz-e2e-mdvg8d/capz-e2e-mdvg8d-ha to be deleted
STEP: Waiting for cluster capz-e2e-mdvg8d-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-jkp54, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-jkp54, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-ngx8r, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-sbz58, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-b2kf2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-4g78z, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-w88px, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-78kf8, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-bc644, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-mdvg8d-ha-control-plane-nkpqm, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-mdvg8d-ha-control-plane-nkpqm, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-sldf2, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-mdvg8d-ha-control-plane-nkpqm, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-khqqs, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-mdvg8d-ha-control-plane-nkpqm, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-bc644, container calico-node-startup: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-mdvg8d
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" ran for 40m37s on Ginkgo node 2 of 3

... skipping 8 lines ...
  with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:334

INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" started at Sat, 13 Nov 2021 18:59:03 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-iq0nx0" for hosting the cluster
Nov 13 18:59:03.429: INFO: starting to create namespace for hosting the "capz-e2e-iq0nx0" test spec
2021/11/13 18:59:03 failed trying to get namespace (capz-e2e-iq0nx0):namespaces "capz-e2e-iq0nx0" not found
INFO: Creating namespace capz-e2e-iq0nx0
INFO: Creating event watcher for namespace "capz-e2e-iq0nx0"
Nov 13 18:59:03.485: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-iq0nx0-vmss
INFO: Creating the workload cluster with name "capz-e2e-iq0nx0-vmss" using the "machine-pool" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 98 lines ...
STEP: waiting for job default/curl-to-elb-jobdal39typtlo to be complete
Nov 13 19:16:22.997: INFO: waiting for job default/curl-to-elb-jobdal39typtlo to be complete
Nov 13 19:16:33.202: INFO: job default/curl-to-elb-jobdal39typtlo is complete, took 10.205261643s
STEP: connecting directly to the external LB service
Nov 13 19:16:33.202: INFO: starting attempts to connect directly to the external LB service
2021/11/13 19:16:33 [DEBUG] GET http://20.67.150.107
2021/11/13 19:17:03 [ERR] GET http://20.67.150.107 request failed: Get "http://20.67.150.107": dial tcp 20.67.150.107:80: i/o timeout
2021/11/13 19:17:03 [DEBUG] GET http://20.67.150.107: retrying in 1s (4 left)
Nov 13 19:17:04.415: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 13 19:17:04.415: INFO: starting to delete external LB service web-windowsrbzah1-elb
Nov 13 19:17:04.545: INFO: starting to delete deployment web-windowsrbzah1
Nov 13 19:17:04.649: INFO: starting to delete job curl-to-elb-jobdal39typtlo
... skipping 33 lines ...
Nov 13 19:21:14.034: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-iq0nx0-vmss-mp-0

Nov 13 19:21:14.563: INFO: INFO: Collecting logs for node win-p-win000001 in cluster capz-e2e-iq0nx0-vmss in namespace capz-e2e-iq0nx0

Nov 13 19:21:32.943: INFO: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-e2e-iq0nx0-vmss-mp-0

Failed to get logs for machine pool capz-e2e-iq0nx0-vmss-mp-0, cluster capz-e2e-iq0nx0/capz-e2e-iq0nx0-vmss: [[running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1], [running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1]]
Nov 13 19:21:33.363: INFO: INFO: Collecting logs for node win-p-win000000 in cluster capz-e2e-iq0nx0-vmss in namespace capz-e2e-iq0nx0

Nov 13 19:22:06.868: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

Nov 13 19:22:07.271: INFO: INFO: Collecting logs for node win-p-win000001 in cluster capz-e2e-iq0nx0-vmss in namespace capz-e2e-iq0nx0

Nov 13 19:22:54.576: INFO: INFO: Collecting boot logs for VMSS instance 1 of scale set win-p-win

Failed to get logs for machine pool capz-e2e-iq0nx0-vmss-mp-win, cluster capz-e2e-iq0nx0/capz-e2e-iq0nx0-vmss: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster capz-e2e-iq0nx0/capz-e2e-iq0nx0-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 1.044668531s
STEP: Dumping workload cluster capz-e2e-iq0nx0/capz-e2e-iq0nx0-vmss Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-w98l4, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-windows-mnss2, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-proxy-gxm8d, container kube-proxy
... skipping 10 lines ...
STEP: Creating log watcher for controller kube-system/calico-node-c4hjq, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-k7pqz, container calico-node
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-n7v5t, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-mxgmj, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-windows-mnss2, container calico-node-felix
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-iq0nx0-vmss-control-plane-kzzxd, container kube-controller-manager
STEP: Got error while iterating over activity logs for resource group capz-e2e-iq0nx0-vmss: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000534301s
STEP: Dumping all the Cluster API resources in the "capz-e2e-iq0nx0" namespace
STEP: Deleting all clusters in the capz-e2e-iq0nx0 namespace
STEP: Deleting cluster capz-e2e-iq0nx0-vmss
INFO: Waiting for the Cluster capz-e2e-iq0nx0/capz-e2e-iq0nx0-vmss to be deleted
STEP: Waiting for cluster capz-e2e-iq0nx0-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-g26mt, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-n7v5t, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-mnss2, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-iq0nx0-vmss-control-plane-kzzxd, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-mnss2, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-xf49m, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-7qq2s, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-iq0nx0-vmss-control-plane-kzzxd, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-mxgmj, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-iq0nx0-vmss-control-plane-kzzxd, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-pmg28, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-w98l4, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-n47rz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-iq0nx0-vmss-control-plane-kzzxd, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-7qq2s, container calico-node-startup: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-iq0nx0
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" ran for 33m45s on Ginkgo node 3 of 3

... skipping 10 lines ...
  Creates a public management cluster in the same vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:144

INFO: "Creates a public management cluster in the same vnet" started at Sat, 13 Nov 2021 18:40:21 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-w4fs54" for hosting the cluster
Nov 13 18:40:21.761: INFO: starting to create namespace for hosting the "capz-e2e-w4fs54" test spec
2021/11/13 18:40:21 failed trying to get namespace (capz-e2e-w4fs54):namespaces "capz-e2e-w4fs54" not found
INFO: Creating namespace capz-e2e-w4fs54
INFO: Creating event watcher for namespace "capz-e2e-w4fs54"
Nov 13 18:40:21.805: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-w4fs54-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 100 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-w4fs54-public-custom-vnet-control-plane-6cpmc, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-54pnf, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-xjp8p, container calico-node
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-w4fs54-public-custom-vnet-control-plane-6cpmc, container etcd
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-5c2d7, container coredns
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-w4fs54-public-custom-vnet-control-plane-6cpmc, container kube-apiserver
STEP: Got error while iterating over activity logs for resource group capz-e2e-w4fs54-public-custom-vnet: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000835432s
STEP: Dumping all the Cluster API resources in the "capz-e2e-w4fs54" namespace
STEP: Deleting all clusters in the capz-e2e-w4fs54 namespace
STEP: Deleting cluster capz-e2e-w4fs54-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-w4fs54/capz-e2e-w4fs54-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-w4fs54-public-custom-vnet to be deleted
W1113 19:25:57.940493   24458 reflector.go:441] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I1113 19:26:29.096378   24458 trace.go:205] Trace[104148717]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (13-Nov-2021 19:25:59.094) (total time: 30001ms):
Trace[104148717]: [30.001409587s] [30.001409587s] END
E1113 19:26:29.096450   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp 20.67.165.78:6443: i/o timeout
I1113 19:27:01.807164   24458 trace.go:205] Trace[1959929135]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (13-Nov-2021 19:26:31.806) (total time: 30000ms):
Trace[1959929135]: [30.00065243s] [30.00065243s] END
E1113 19:27:01.807226   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp 20.67.165.78:6443: i/o timeout
I1113 19:27:37.098933   24458 trace.go:205] Trace[1556332242]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (13-Nov-2021 19:27:07.097) (total time: 30001ms):
Trace[1556332242]: [30.001132106s] [30.001132106s] END
E1113 19:27:37.098997   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp 20.67.165.78:6443: i/o timeout
I1113 19:28:17.481312   24458 trace.go:205] Trace[334361983]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167 (13-Nov-2021 19:27:47.480) (total time: 30001ms):
Trace[334361983]: [30.001208696s] [30.001208696s] END
E1113 19:28:17.481396   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp 20.67.165.78:6443: i/o timeout
E1113 19:28:36.971387   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp: lookup capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-w4fs54
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Nov 13 19:29:06.848: INFO: deleting an existing virtual network "custom-vnet"
E1113 19:29:15.426671   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp: lookup capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 13 19:29:17.822: INFO: deleting an existing route table "node-routetable"
Nov 13 19:29:28.946: INFO: deleting an existing network security group "node-nsg"
Nov 13 19:29:39.771: INFO: deleting an existing network security group "control-plane-nsg"
Nov 13 19:29:50.542: INFO: verifying the existing resource group "capz-e2e-w4fs54-public-custom-vnet" is empty
E1113 19:29:51.463474   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp: lookup capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 13 19:29:54.882: INFO: deleting the existing resource group "capz-e2e-w4fs54-public-custom-vnet"
E1113 19:30:37.553270   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp: lookup capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1113 19:31:23.626676   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp: lookup capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1113 19:31:55.143596   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp: lookup capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1113 19:32:31.085671   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp: lookup capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1113 19:33:16.721969   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp: lookup capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in the same vnet" ran for 53m26s on Ginkgo node 1 of 3


• [SLOW TEST:3206.016 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:413

INFO: "with a single control plane node and 1 node" started at Sat, 13 Nov 2021 19:20:58 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-wo9puf" for hosting the cluster
Nov 13 19:20:58.676: INFO: starting to create namespace for hosting the "capz-e2e-wo9puf" test spec
2021/11/13 19:20:58 failed trying to get namespace (capz-e2e-wo9puf):namespaces "capz-e2e-wo9puf" not found
INFO: Creating namespace capz-e2e-wo9puf
INFO: Creating event watcher for namespace "capz-e2e-wo9puf"
Nov 13 19:20:58.716: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-wo9puf-gpu
INFO: Creating the workload cluster with name "capz-e2e-wo9puf-gpu" using the "nvidia-gpu" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 58 lines ...
STEP: Fetching activity logs took 542.88713ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-wo9puf" namespace
STEP: Deleting all clusters in the capz-e2e-wo9puf namespace
STEP: Deleting cluster capz-e2e-wo9puf-gpu
INFO: Waiting for the Cluster capz-e2e-wo9puf/capz-e2e-wo9puf-gpu to be deleted
STEP: Waiting for cluster capz-e2e-wo9puf-gpu to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-wo9puf-gpu-control-plane-8l7cz, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-wo9puf-gpu-control-plane-8l7cz, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-gskcf, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-wo9puf-gpu-control-plane-8l7cz, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-btdvb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-twqrb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2h7pz, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-xkr65, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-wo9puf-gpu-control-plane-8l7cz, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-7fq4t, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-xkmp6, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-wo9puf
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and 1 node" ran for 20m24s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:490

INFO: "with a single control plane node and 1 node" started at Sat, 13 Nov 2021 19:33:47 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-2508ed" for hosting the cluster
Nov 13 19:33:47.783: INFO: starting to create namespace for hosting the "capz-e2e-2508ed" test spec
2021/11/13 19:33:47 failed trying to get namespace (capz-e2e-2508ed):namespaces "capz-e2e-2508ed" not found
INFO: Creating namespace capz-e2e-2508ed
INFO: Creating event watcher for namespace "capz-e2e-2508ed"
Nov 13 19:33:47.817: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-2508ed-aks
INFO: Creating the workload cluster with name "capz-e2e-2508ed-aks" using the "aks-multi-tenancy" template (Kubernetes v1.19.13, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 7 lines ...
machinepool.cluster.x-k8s.io/agentpool1 created
azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created

INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
E1113 19:34:08.329904   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp: lookup capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1113 19:34:38.658937   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp: lookup capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1113 19:35:21.001054   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp: lookup capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1113 19:36:03.771500   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp: lookup capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1113 19:36:53.561242   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp: lookup capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1113 19:37:42.246458   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp: lookup capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be initialized
Nov 13 19:38:29.425: INFO: Waiting for the first control plane machine managed by capz-e2e-2508ed/capz-e2e-2508ed-aks to be provisioned
STEP: Waiting for at least one control plane node to exist
E1113 19:38:32.308214   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp: lookup capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: Waiting for control plane to be ready
Nov 13 19:38:49.463: INFO: Waiting for the first control plane machine managed by capz-e2e-2508ed/capz-e2e-2508ed-aks to be provisioned
STEP: Waiting for all control plane nodes to exist
INFO: Waiting for the machine deployments to be provisioned
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes to exist
... skipping 10 lines ...
STEP: time sync OK for host aks-agentpool1-17546561-vmss000000
STEP: time sync OK for host aks-agentpool1-17546561-vmss000000
STEP: Dumping logs from the "capz-e2e-2508ed-aks" workload cluster
STEP: Dumping workload cluster capz-e2e-2508ed/capz-e2e-2508ed-aks logs
Nov 13 19:38:58.570: INFO: INFO: Collecting logs for node aks-agentpool1-17546561-vmss000000 in cluster capz-e2e-2508ed-aks in namespace capz-e2e-2508ed

E1113 19:39:27.134970   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp: lookup capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1113 19:40:16.452535   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp: lookup capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1113 19:41:03.682138   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp: lookup capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 13 19:41:09.733: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool0, cluster capz-e2e-2508ed/capz-e2e-2508ed-aks: [dialing public load balancer at capz-e2e-2508ed-aks-7b5c5f98.hcp.northeurope.azmk8s.io: dial tcp 20.67.218.62:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
Nov 13 19:41:11.004: INFO: INFO: Collecting logs for node aks-agentpool1-17546561-vmss000000 in cluster capz-e2e-2508ed-aks in namespace capz-e2e-2508ed

E1113 19:41:50.456929   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp: lookup capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1113 19:42:46.500255   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp: lookup capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1113 19:43:20.262797   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp: lookup capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Nov 13 19:43:20.809: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0

Failed to get logs for machine pool agentpool1, cluster capz-e2e-2508ed/capz-e2e-2508ed-aks: [dialing public load balancer at capz-e2e-2508ed-aks-7b5c5f98.hcp.northeurope.azmk8s.io: dial tcp 20.67.218.62:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."]
STEP: Dumping workload cluster capz-e2e-2508ed/capz-e2e-2508ed-aks kube-system pod logs
STEP: Fetching kube-system pod logs took 983.143936ms
STEP: Dumping workload cluster capz-e2e-2508ed/capz-e2e-2508ed-aks Azure activity log
STEP: Creating log watcher for controller kube-system/calico-typha-deployment-76cb9744d8-csm4l, container calico-typha
STEP: Creating log watcher for controller kube-system/coredns-autoscaler-54d55c8b75-ffs7c, container autoscaler
STEP: Creating log watcher for controller kube-system/kube-proxy-ptt95, container kube-proxy
... skipping 8 lines ...
STEP: Fetching activity logs took 573.273403ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-2508ed" namespace
STEP: Deleting all clusters in the capz-e2e-2508ed namespace
STEP: Deleting cluster capz-e2e-2508ed-aks
INFO: Waiting for the Cluster capz-e2e-2508ed/capz-e2e-2508ed-aks to be deleted
STEP: Waiting for cluster capz-e2e-2508ed-aks to be deleted
E1113 19:44:03.869499   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp: lookup capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1113 19:44:53.716915   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp: lookup capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1113 19:45:27.261818   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp: lookup capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1113 19:46:23.402232   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp: lookup capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1113 19:47:17.233987   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp: lookup capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E1113 19:48:01.780276   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp: lookup capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-2508ed
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
E1113 19:48:50.196728   24458 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-w4fs54/events?resourceVersion=9771": dial tcp: lookup capz-e2e-w4fs54-public-custom-vnet-1fe1ee65.northeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "with a single control plane node and 1 node" ran for 15m33s on Ginkgo node 1 of 3


• [SLOW TEST:933.234 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
... skipping 6 lines ...
  with a 1 control plane nodes and 2 worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:455

INFO: "with a 1 control plane nodes and 2 worker nodes" started at Sat, 13 Nov 2021 19:32:48 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-rwrleq" for hosting the cluster
Nov 13 19:32:48.474: INFO: starting to create namespace for hosting the "capz-e2e-rwrleq" test spec
2021/11/13 19:32:48 failed trying to get namespace (capz-e2e-rwrleq):namespaces "capz-e2e-rwrleq" not found
INFO: Creating namespace capz-e2e-rwrleq
INFO: Creating event watcher for namespace "capz-e2e-rwrleq"
Nov 13 19:32:48.514: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)
INFO: Cluster name is capz-e2e-rwrleq-oot
INFO: Creating the workload cluster with name "capz-e2e-rwrleq-oot" using the "external-cloud-provider" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 53 lines ...
STEP: waiting for job default/curl-to-elb-jobm8npc7mnndd to be complete
Nov 13 19:41:44.700: INFO: waiting for job default/curl-to-elb-jobm8npc7mnndd to be complete
Nov 13 19:41:54.908: INFO: job default/curl-to-elb-jobm8npc7mnndd is complete, took 10.207472758s
STEP: connecting directly to the external LB service
Nov 13 19:41:54.908: INFO: starting attempts to connect directly to the external LB service
2021/11/13 19:41:54 [DEBUG] GET http://20.82.254.167
2021/11/13 19:42:24 [ERR] GET http://20.82.254.167 request failed: Get "http://20.82.254.167": dial tcp 20.82.254.167:80: i/o timeout
2021/11/13 19:42:24 [DEBUG] GET http://20.82.254.167: retrying in 1s (4 left)
Nov 13 19:42:26.111: INFO: successfully connected to the external LB service
STEP: deleting the test resources
Nov 13 19:42:26.111: INFO: starting to delete external LB service webeyyt9g-elb
Nov 13 19:42:26.238: INFO: starting to delete deployment webeyyt9g
Nov 13 19:42:26.344: INFO: starting to delete job curl-to-elb-jobm8npc7mnndd
... skipping 34 lines ...
STEP: Fetching activity logs took 525.590848ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-rwrleq" namespace
STEP: Deleting all clusters in the capz-e2e-rwrleq namespace
STEP: Deleting cluster capz-e2e-rwrleq-oot
INFO: Waiting for the Cluster capz-e2e-rwrleq/capz-e2e-rwrleq-oot to be deleted
STEP: Waiting for cluster capz-e2e-rwrleq-oot to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6sb4f, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-nndcd, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-lwpfn, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/cloud-node-manager-xq9pz, container cloud-node-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-blss7, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-846b5f484d-8k67x, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-fw6kh, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8s9s5, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-rwrleq
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a 1 control plane nodes and 2 worker nodes" ran for 18m17s on Ginkgo node 3 of 3

... skipping 12 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Sat, 13 Nov 2021 19:41:23 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-f7lox6" for hosting the cluster
Nov 13 19:41:23.149: INFO: starting to create namespace for hosting the "capz-e2e-f7lox6" test spec
2021/11/13 19:41:23 failed trying to get namespace (capz-e2e-f7lox6):namespaces "capz-e2e-f7lox6" not found
INFO: Creating namespace capz-e2e-f7lox6
INFO: Creating event watcher for namespace "capz-e2e-f7lox6"
Nov 13 19:41:23.180: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-f7lox6-win-ha
INFO: Creating the workload cluster with name "capz-e2e-f7lox6-win-ha" using the "windows" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 151 lines ...
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-qg5qn, container kube-flannel
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-4kxqk, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-f7lox6-win-ha-control-plane-8cn8z, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-f7lox6-win-ha-control-plane-bpqrx, container kube-scheduler
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-f7lox6-win-ha-control-plane-hl4qs, container etcd
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-f7lox6-win-ha-control-plane-hl4qs, container kube-scheduler
STEP: Got error while iterating over activity logs for resource group capz-e2e-f7lox6-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000476784s
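The activity-log failure above is a paginated list call hitting a context deadline: the first pages come back, then listNextResults times out at the 30s budget. A minimal, stdlib-only sketch of that pattern; the pager interface here is hypothetical, standing in for the Azure SDK's activity-log page iterator:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// activityLogPager is a hypothetical stand-in for the Azure SDK's paged
// activity-log iterator (NotDone / NextWithContext / Values).
type activityLogPager interface {
	NotDone() bool
	NextWithContext(ctx context.Context) error
	Values() []string
}

// fetchActivityLogs drains a pager under a hard 30s budget; when a
// next-page request exceeds the budget it returns what it already has,
// which is the behaviour behind "context deadline exceeded" above.
func fetchActivityLogs(parent context.Context, pager activityLogPager) ([]string, error) {
	ctx, cancel := context.WithTimeout(parent, 30*time.Second)
	defer cancel()

	var entries []string
	for pager.NotDone() {
		entries = append(entries, pager.Values()...)
		if err := pager.NextWithContext(ctx); err != nil {
			if errors.Is(err, context.DeadlineExceeded) {
				return entries, fmt.Errorf("iterating activity logs: %w", err)
			}
			return entries, err
		}
	}
	return entries, nil
}

// fakePager simulates a slow second page so the example runs standalone.
type fakePager struct{ page int }

func (p *fakePager) NotDone() bool    { return p.page < 2 }
func (p *fakePager) Values() []string { return []string{fmt.Sprintf("entry-from-page-%d", p.page)} }
func (p *fakePager) NextWithContext(ctx context.Context) error {
	p.page++
	select {
	case <-time.After(40 * time.Second): // slower than the 30s budget
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	start := time.Now()
	entries, err := fetchActivityLogs(context.Background(), &fakePager{})
	fmt.Printf("fetched %d entries in %s, err: %v\n", len(entries), time.Since(start), err)
}
```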
STEP: Dumping all the Cluster API resources in the "capz-e2e-f7lox6" namespace
STEP: Deleting all clusters in the capz-e2e-f7lox6 namespace
STEP: Deleting cluster capz-e2e-f7lox6-win-ha
INFO: Waiting for the Cluster capz-e2e-f7lox6/capz-e2e-f7lox6-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-f7lox6-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-vxwgh, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-r22f2, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-f7lox6-win-ha-control-plane-8cn8z, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-f7lox6-win-ha-control-plane-bpqrx, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-f7lox6-win-ha-control-plane-8cn8z, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-f7lox6-win-ha-control-plane-bpqrx, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-gx5fp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-qg5qn, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-fz75z, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-9k7w6, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-f7lox6-win-ha-control-plane-8cn8z, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-l49qj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-f7lox6-win-ha-control-plane-bpqrx, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-f7lox6-win-ha-control-plane-bpqrx, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-pn5nf, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-f7lox6-win-ha-control-plane-8cn8z, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-4kxqk, container kube-proxy: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-f7lox6
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 36m3s on Ginkgo node 2 of 3

... skipping 3 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a Windows Enabled cluster with dockershim
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:530
    With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532
------------------------------
{"component":"entrypoint","file":"prow/entrypoint/run.go:165","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2021-11-13T20:33:20Z"}
++ early_exit_handler
++ '[' -n 166 ']'
++ kill -TERM 166
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"prow/entrypoint/run.go:255","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2021-11-13T20:48:20Z"}
{"component":"entrypoint","error":"os: process already finished","file":"prow/entrypoint/run.go:257","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2021-11-13T20:48:20Z"}