Result: FAILURE
Tests: 0 failed / 6 succeeded
Started: 2023-01-01 21:02
Elapsed: 4h15m
Revision: release-1.6

No Test Failures!


6 Passed Tests

16 Skipped Tests

Error lines from build-log.txt

... skipping 599 lines ...
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-72dj9, container coredns
STEP: Collecting events for Pod kube-system/kube-proxy-gv868
STEP: Collecting events for Pod kube-system/kube-apiserver-mhc-remediation-0r623x-control-plane-4tvc7
STEP: Collecting events for Pod kube-system/kube-proxy-fv76w
STEP: Creating log watcher for controller kube-system/kube-proxy-gv868, container kube-proxy
STEP: Collecting events for Pod kube-system/etcd-mhc-remediation-0r623x-control-plane-4tvc7
STEP: failed to find events of Pod "etcd-mhc-remediation-0r623x-control-plane-4tvc7"
STEP: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-0r623x-control-plane-4tvc7, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-0r623x-control-plane-4tvc7, container kube-scheduler
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-xp8f7
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-0r623x-control-plane-4tvc7, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-xp8f7, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/kube-controller-manager-mhc-remediation-0r623x-control-plane-4tvc7
STEP: failed to find events of Pod "kube-controller-manager-mhc-remediation-0r623x-control-plane-4tvc7"
STEP: Creating log watcher for controller kube-system/etcd-mhc-remediation-0r623x-control-plane-4tvc7, container etcd
STEP: Collecting events for Pod kube-system/kube-scheduler-mhc-remediation-0r623x-control-plane-4tvc7
STEP: failed to find events of Pod "kube-scheduler-mhc-remediation-0r623x-control-plane-4tvc7"
STEP: Fetching activity logs took 3.620888049s
STEP: Dumping all the Cluster API resources in the "mhc-remediation-jkflk9" namespace
STEP: Deleting cluster mhc-remediation-jkflk9/mhc-remediation-0r623x
STEP: Deleting cluster mhc-remediation-0r623x
INFO: Waiting for the Cluster mhc-remediation-jkflk9/mhc-remediation-0r623x to be deleted
STEP: Waiting for cluster mhc-remediation-0r623x to be deleted
... skipping 16 lines ...
  Should pivot the bootstrap cluster to a self-hosted cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107

INFO: "Should pivot the bootstrap cluster to a self-hosted cluster" started at Sun, 01 Jan 2023 21:11:36 UTC on Ginkgo node 4 of 10
STEP: Creating namespace "self-hosted" for hosting the cluster
Jan  1 21:11:36.194: INFO: starting to create namespace for hosting the "self-hosted" test spec
2023/01/01 21:11:36 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-ujha7n" using the "management" template (Kubernetes v1.23.15, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-ujha7n --infrastructure (default) --kubernetes-version v1.23.15 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 68 lines ...
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-rppb6, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/kube-proxy-vgxds
STEP: Creating log watcher for controller kube-system/kube-apiserver-self-hosted-ujha7n-control-plane-8rxh6, container kube-apiserver
STEP: Collecting events for Pod kube-system/etcd-self-hosted-ujha7n-control-plane-8rxh6
STEP: Collecting events for Pod kube-system/kube-scheduler-self-hosted-ujha7n-control-plane-8rxh6
STEP: Collecting events for Pod kube-system/kube-apiserver-self-hosted-ujha7n-control-plane-8rxh6
STEP: failed to find events of Pod "kube-scheduler-self-hosted-ujha7n-control-plane-8rxh6"
STEP: failed to find events of Pod "kube-apiserver-self-hosted-ujha7n-control-plane-8rxh6"
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-rppb6
STEP: Creating log watcher for controller kube-system/kube-scheduler-self-hosted-ujha7n-control-plane-8rxh6, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-node-2ntff, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-9q5nk, container coredns
STEP: Collecting events for Pod kube-system/calico-node-fqjj5
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-bsxnd, container coredns
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-9q5nk
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-bsxnd
STEP: Creating log watcher for controller kube-system/kube-proxy-cqrvt, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-cqrvt
STEP: Creating log watcher for controller kube-system/calico-node-fqjj5, container calico-node
STEP: Collecting events for Pod kube-system/kube-controller-manager-self-hosted-ujha7n-control-plane-8rxh6
STEP: failed to find events of Pod "kube-controller-manager-self-hosted-ujha7n-control-plane-8rxh6"
STEP: failed to find events of Pod "etcd-self-hosted-ujha7n-control-plane-8rxh6"
STEP: Fetching activity logs took 1.920472326s
Jan  1 21:22:14.483: INFO: Dumping all the Cluster API resources in the "self-hosted" namespace
Jan  1 21:22:14.873: INFO: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-ujha7n
INFO: Waiting for the Cluster self-hosted/self-hosted-ujha7n to be deleted
STEP: Waiting for cluster self-hosted-ujha7n to be deleted
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-8f6f78b8b-tj444, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-system/capi-controller-manager-5b6d47468d-lpz2t, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capz-system/capz-controller-manager-767ffc7f8-7khng, container manager: http2: client connection lost
INFO: Got error while streaming logs for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-66968bb4c5-m6dw6, container manager: http2: client connection lost
Jan  1 21:25:45.086: INFO: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
Jan  1 21:25:45.110: INFO: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
Jan  1 21:26:21.759: INFO: Cleaning up after "Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster" spec
STEP: Redacting sensitive information from logs
... skipping 68 lines ...

Jan  1 21:20:06.075: INFO: Collecting logs for Windows node quick-sta-m9dgg in cluster quick-start-e437fi in namespace quick-start-4qu8yb

Jan  1 21:22:46.009: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-m9dgg to /logs/artifacts/clusters/quick-start-e437fi/machines/quick-start-e437fi-md-win-546d6cf75f-ckjtb/crashdumps.tar
Jan  1 21:22:47.774: INFO: Collecting boot logs for AzureMachine quick-start-e437fi-md-win-m9dgg

Failed to get logs for machine quick-start-e437fi-md-win-546d6cf75f-ckjtb, cluster quick-start-4qu8yb/quick-start-e437fi: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Jan  1 21:22:48.557: INFO: Collecting logs for Windows node quick-sta-wqdhj in cluster quick-start-e437fi in namespace quick-start-4qu8yb

Jan  1 21:25:24.902: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-wqdhj to /logs/artifacts/clusters/quick-start-e437fi/machines/quick-start-e437fi-md-win-546d6cf75f-x9qr4/crashdumps.tar
Jan  1 21:25:26.769: INFO: Collecting boot logs for AzureMachine quick-start-e437fi-md-win-wqdhj

Failed to get logs for machine quick-start-e437fi-md-win-546d6cf75f-x9qr4, cluster quick-start-4qu8yb/quick-start-e437fi: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster quick-start-4qu8yb/quick-start-e437fi kube-system pod logs
STEP: Collecting events for Pod kube-system/containerd-logger-29m6t
STEP: Creating log watcher for controller kube-system/csi-proxy-2k9c4, container csi-proxy
STEP: Collecting events for Pod kube-system/csi-proxy-f7tts
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-nfdm6, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/csi-proxy-2k9c4
... skipping 21 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-rvm44, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-kn9gd, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-lpd4p, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-windows-kn9gd
STEP: Collecting events for Pod kube-system/etcd-quick-start-e437fi-control-plane-f6nrq
STEP: Collecting events for Pod kube-system/kube-proxy-windows-lpd4p
STEP: failed to find events of Pod "etcd-quick-start-e437fi-control-plane-f6nrq"
STEP: Creating log watcher for controller kube-system/kube-scheduler-quick-start-e437fi-control-plane-f6nrq, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-proxy-rvm44
STEP: Creating log watcher for controller kube-system/kube-controller-manager-quick-start-e437fi-control-plane-f6nrq, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-scheduler-quick-start-e437fi-control-plane-f6nrq
STEP: Collecting events for Pod kube-system/kube-proxy-kdv6f
STEP: failed to find events of Pod "kube-scheduler-quick-start-e437fi-control-plane-f6nrq"
STEP: Creating log watcher for controller kube-system/calico-node-windows-zlrmk, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-proxy-kdv6f, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-controller-manager-quick-start-e437fi-control-plane-f6nrq
STEP: failed to find events of Pod "kube-controller-manager-quick-start-e437fi-control-plane-f6nrq"
STEP: Collecting events for Pod kube-system/kube-apiserver-quick-start-e437fi-control-plane-f6nrq
STEP: failed to find events of Pod "kube-apiserver-quick-start-e437fi-control-plane-f6nrq"
STEP: Creating log watcher for controller kube-system/kube-apiserver-quick-start-e437fi-control-plane-f6nrq, container kube-apiserver
STEP: Fetching activity logs took 1.373047753s
STEP: Dumping all the Cluster API resources in the "quick-start-4qu8yb" namespace
STEP: Deleting cluster quick-start-4qu8yb/quick-start-e437fi
STEP: Deleting cluster quick-start-e437fi
INFO: Waiting for the Cluster quick-start-4qu8yb/quick-start-e437fi to be deleted
... skipping 93 lines ...
STEP: Dumping workload cluster machine-pool-rps1ce/machine-pool-np2dke Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-shn7z, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-scheduler-machine-pool-np2dke-control-plane-vfb64, container kube-scheduler
STEP: Collecting events for Pod kube-system/calico-node-4gv4n
STEP: Creating log watcher for controller kube-system/calico-node-4gv4n, container calico-node
STEP: Collecting events for Pod kube-system/etcd-machine-pool-np2dke-control-plane-vfb64
STEP: failed to find events of Pod "etcd-machine-pool-np2dke-control-plane-vfb64"
STEP: Creating log watcher for controller kube-system/kube-proxy-wc5sn, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-fbnn7
STEP: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-np2dke-control-plane-vfb64, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-apiserver-machine-pool-np2dke-control-plane-vfb64
STEP: Creating log watcher for controller kube-system/kube-apiserver-machine-pool-np2dke-control-plane-vfb64, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-nx6cs, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-wc5sn
STEP: failed to find events of Pod "kube-apiserver-machine-pool-np2dke-control-plane-vfb64"
STEP: Collecting events for Pod kube-system/kube-controller-manager-machine-pool-np2dke-control-plane-vfb64
STEP: failed to find events of Pod "kube-controller-manager-machine-pool-np2dke-control-plane-vfb64"
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-9js62, container coredns
STEP: failed to find events of Pod "kube-scheduler-machine-pool-np2dke-control-plane-vfb64"
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-nx6cs, container kube-proxy: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-wc5sn, container kube-proxy: pods "machine-pool-np2dke-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-mpwv8, container calico-node: pods "machine-pool-np2dke-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-khttq, container calico-node-felix: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-khttq, container calico-node-startup: pods "win-p-win000002" not found
STEP: Fetching activity logs took 2.719451093s
STEP: Dumping all the Cluster API resources in the "machine-pool-rps1ce" namespace
STEP: Deleting cluster machine-pool-rps1ce/machine-pool-np2dke
STEP: Deleting cluster machine-pool-np2dke
INFO: Waiting for the Cluster machine-pool-rps1ce/machine-pool-np2dke to be deleted
STEP: Waiting for cluster machine-pool-np2dke to be deleted
... skipping 214 lines ...

Jan  1 21:23:28.064: INFO: Collecting logs for Windows node md-scale-dccn5 in cluster md-scale-7s6awa in namespace md-scale-sk78xz

Jan  1 21:26:07.930: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-dccn5 to /logs/artifacts/clusters/md-scale-7s6awa/machines/md-scale-7s6awa-md-win-68d6d6c44d-9z8mc/crashdumps.tar
Jan  1 21:26:09.686: INFO: Collecting boot logs for AzureMachine md-scale-7s6awa-md-win-dccn5

Failed to get logs for machine md-scale-7s6awa-md-win-68d6d6c44d-9z8mc, cluster md-scale-sk78xz/md-scale-7s6awa: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Jan  1 21:26:10.611: INFO: Collecting logs for Windows node md-scale-tg2ch in cluster md-scale-7s6awa in namespace md-scale-sk78xz

Jan  1 21:28:49.181: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-tg2ch to /logs/artifacts/clusters/md-scale-7s6awa/machines/md-scale-7s6awa-md-win-68d6d6c44d-vk87m/crashdumps.tar
Jan  1 21:28:50.953: INFO: Collecting boot logs for AzureMachine md-scale-7s6awa-md-win-tg2ch

Failed to get logs for machine md-scale-7s6awa-md-win-68d6d6c44d-vk87m, cluster md-scale-sk78xz/md-scale-7s6awa: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster md-scale-sk78xz/md-scale-7s6awa kube-system pod logs
STEP: Fetching kube-system pod logs took 410.07298ms
STEP: Dumping workload cluster md-scale-sk78xz/md-scale-7s6awa Azure activity log
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-8nbzt
STEP: Creating log watcher for controller kube-system/calico-node-tx2j7, container calico-node
STEP: Collecting events for Pod kube-system/containerd-logger-7hx2m
STEP: Creating log watcher for controller kube-system/containerd-logger-jx7bq, container containerd-logger
STEP: Collecting events for Pod kube-system/calico-node-tx2j7
STEP: Creating log watcher for controller kube-system/calico-node-windows-h9ln2, container calico-node-startup
STEP: Collecting events for Pod kube-system/containerd-logger-jx7bq
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-k5c5z, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-windows-h9ln2, container calico-node-felix
STEP: Collecting events for Pod kube-system/kube-controller-manager-md-scale-7s6awa-control-plane-qtc74
STEP: failed to find events of Pod "kube-controller-manager-md-scale-7s6awa-control-plane-qtc74"
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-k5c5z
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-tqjw8, container coredns
STEP: Creating log watcher for controller kube-system/kube-proxy-4t9ss, container kube-proxy
STEP: Collecting events for Pod kube-system/calico-node-windows-h9ln2
STEP: Creating log watcher for controller kube-system/calico-node-windows-z8fvz, container calico-node-startup
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-wldzd, container kube-proxy
... skipping 10 lines ...
STEP: Collecting events for Pod kube-system/calico-node-windows-z8fvz
STEP: Creating log watcher for controller kube-system/calico-node-xdk9q, container calico-node
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-tqjw8
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-8nbzt, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-scale-7s6awa-control-plane-qtc74, container kube-apiserver
STEP: Collecting events for Pod kube-system/calico-node-xdk9q
STEP: failed to find events of Pod "kube-scheduler-md-scale-7s6awa-control-plane-qtc74"
STEP: Collecting events for Pod kube-system/kube-proxy-windows-wldzd
STEP: Creating log watcher for controller kube-system/kube-scheduler-md-scale-7s6awa-control-plane-qtc74, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-proxy-l4tn6
STEP: Collecting events for Pod kube-system/etcd-md-scale-7s6awa-control-plane-qtc74
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-bhjpb, container kube-proxy
STEP: failed to find events of Pod "etcd-md-scale-7s6awa-control-plane-qtc74"
STEP: Collecting events for Pod kube-system/kube-apiserver-md-scale-7s6awa-control-plane-qtc74
STEP: failed to find events of Pod "kube-apiserver-md-scale-7s6awa-control-plane-qtc74"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-md-scale-7s6awa-control-plane-qtc74, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-proxy-windows-bhjpb
STEP: Fetching activity logs took 4.602254618s
STEP: Dumping all the Cluster API resources in the "md-scale-sk78xz" namespace
STEP: Deleting cluster md-scale-sk78xz/md-scale-7s6awa
STEP: Deleting cluster md-scale-7s6awa
... skipping 69 lines ...
STEP: Dumping logs from the "node-drain-v6gph9" workload cluster
STEP: Dumping workload cluster node-drain-frprye/node-drain-v6gph9 logs
Jan  1 21:26:55.370: INFO: Collecting logs for Linux node node-drain-v6gph9-control-plane-ghxzq in cluster node-drain-v6gph9 in namespace node-drain-frprye

Jan  1 21:33:29.734: INFO: Collecting boot logs for AzureMachine node-drain-v6gph9-control-plane-ghxzq

Failed to get logs for machine node-drain-v6gph9-control-plane-nsjfz, cluster node-drain-frprye/node-drain-v6gph9: dialing public load balancer at node-drain-v6gph9-5b4a24b5.canadacentral.cloudapp.azure.com: dial tcp 20.175.153.160:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-frprye/node-drain-v6gph9 kube-system pod logs
STEP: Fetching kube-system pod logs took 403.93448ms
STEP: Creating log watcher for controller kube-system/etcd-node-drain-v6gph9-control-plane-ghxzq, container etcd
STEP: Creating log watcher for controller kube-system/kube-scheduler-node-drain-v6gph9-control-plane-ghxzq, container kube-scheduler
STEP: Collecting events for Pod kube-system/calico-node-ffx2w
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-jklc2, container calico-kube-controllers
... skipping 30 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:45
  Should successfully set and use node drain timeout
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:183
    A node should be forcefully removed if it cannot be drained in time
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.6/e2e/node_drain_timeout.go:83
------------------------------
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 4h0m0s timeout","severity":"error","time":"2023-01-02T01:02:51Z"}
++ early_exit_handler
++ '[' -n 164 ']'
++ kill -TERM 164
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:254","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2023-01-02T01:17:51Z"}
{"component":"entrypoint","error":"os: process already finished","file":"k8s.io/test-infra/prow/entrypoint/run.go:256","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2023-01-02T01:17:51Z"}