Result: FAILURE
Tests: 0 failed / 7 succeeded
Started: 2022-09-24 20:36
Elapsed: 4h15m
Revision: release-1.5

No Test Failures!


7 Passed Tests

14 Skipped Tests

Error lines from build-log.txt

... skipping 595 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-adoption-9szufq-control-plane-0, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-adoption-9szufq-control-plane-0, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-controller-manager-kcp-adoption-9szufq-control-plane-0
STEP: Collecting events for Pod kube-system/kube-scheduler-kcp-adoption-9szufq-control-plane-0
STEP: Dumping workload cluster kcp-adoption-nuvgna/kcp-adoption-9szufq Azure activity log
STEP: Creating log watcher for controller kube-system/kube-proxy-6m9hh, container kube-proxy
STEP: failed to find events of Pod "kube-controller-manager-kcp-adoption-9szufq-control-plane-0"
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-4p9xh, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-4p9xh
STEP: failed to find events of Pod "kube-scheduler-kcp-adoption-9szufq-control-plane-0"
STEP: Collecting events for Pod kube-system/etcd-kcp-adoption-9szufq-control-plane-0
STEP: Collecting events for Pod kube-system/calico-node-j6qtr
STEP: failed to find events of Pod "etcd-kcp-adoption-9szufq-control-plane-0"
STEP: Collecting events for Pod kube-system/coredns-64897985d-4nlnh
STEP: Creating log watcher for controller kube-system/coredns-64897985d-k9kx4, container coredns
STEP: Collecting events for Pod kube-system/coredns-64897985d-k9kx4
STEP: Creating log watcher for controller kube-system/etcd-kcp-adoption-9szufq-control-plane-0, container etcd
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-adoption-9szufq-control-plane-0, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-proxy-6m9hh
STEP: Creating log watcher for controller kube-system/calico-node-j6qtr, container calico-node
STEP: Collecting events for Pod kube-system/kube-apiserver-kcp-adoption-9szufq-control-plane-0
STEP: failed to find events of Pod "kube-apiserver-kcp-adoption-9szufq-control-plane-0"
STEP: Fetching activity logs took 1.932978235s
STEP: Dumping all the Cluster API resources in the "kcp-adoption-nuvgna" namespace
STEP: Deleting cluster kcp-adoption-nuvgna/kcp-adoption-9szufq
STEP: Deleting cluster kcp-adoption-9szufq
INFO: Waiting for the Cluster kcp-adoption-nuvgna/kcp-adoption-9szufq to be deleted
STEP: Waiting for cluster kcp-adoption-9szufq to be deleted
... skipping 77 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-s6cnx, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-64897985d-t2p2m, container coredns
STEP: Collecting events for Pod kube-system/kube-proxy-s6cnx
STEP: Collecting events for Pod kube-system/calico-node-q6r25
STEP: Collecting events for Pod kube-system/kube-apiserver-mhc-remediation-fdfyfg-control-plane-67vqq
STEP: Creating log watcher for controller kube-system/coredns-64897985d-w9784, container coredns
STEP: failed to find events of Pod "kube-apiserver-mhc-remediation-fdfyfg-control-plane-67vqq"
STEP: Collecting events for Pod kube-system/etcd-mhc-remediation-fdfyfg-control-plane-67vqq
STEP: Creating log watcher for controller kube-system/calico-node-rcg2f, container calico-node
STEP: failed to find events of Pod "etcd-mhc-remediation-fdfyfg-control-plane-67vqq"
STEP: Creating log watcher for controller kube-system/kube-apiserver-mhc-remediation-fdfyfg-control-plane-67vqq, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-xbz68, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-mhc-remediation-fdfyfg-control-plane-67vqq, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-scheduler-mhc-remediation-fdfyfg-control-plane-67vqq
STEP: Collecting events for Pod kube-system/calico-node-rcg2f
STEP: failed to find events of Pod "kube-scheduler-mhc-remediation-fdfyfg-control-plane-67vqq"
STEP: Collecting events for Pod kube-system/kube-proxy-xbz68
STEP: Creating log watcher for controller kube-system/kube-scheduler-mhc-remediation-fdfyfg-control-plane-67vqq, container kube-scheduler
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-csfbr, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/coredns-64897985d-w9784
STEP: Collecting events for Pod kube-system/kube-controller-manager-mhc-remediation-fdfyfg-control-plane-67vqq
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-csfbr
STEP: failed to find events of Pod "kube-controller-manager-mhc-remediation-fdfyfg-control-plane-67vqq"
STEP: Fetching activity logs took 1.763622925s
STEP: Dumping all the Cluster API resources in the "mhc-remediation-efpqlo" namespace
STEP: Deleting cluster mhc-remediation-efpqlo/mhc-remediation-fdfyfg
STEP: Deleting cluster mhc-remediation-fdfyfg
INFO: Waiting for the Cluster mhc-remediation-efpqlo/mhc-remediation-fdfyfg to be deleted
STEP: Waiting for cluster mhc-remediation-fdfyfg to be deleted
... skipping 94 lines ...
STEP: Creating log watcher for controller kube-system/coredns-64897985d-v9ls5, container coredns
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-vbpda4-control-plane-8frgf, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-dztw5, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-dztw5
STEP: Creating log watcher for controller kube-system/kube-apiserver-machine-pool-vbpda4-control-plane-8frgf, container kube-apiserver
STEP: Collecting events for Pod kube-system/etcd-machine-pool-vbpda4-control-plane-8frgf
STEP: failed to find events of Pod "etcd-machine-pool-vbpda4-control-plane-8frgf"
STEP: Collecting events for Pod kube-system/calico-node-windows-cr96t
STEP: Creating log watcher for controller kube-system/calico-node-windows-cr96t, container calico-node-felix
STEP: Creating log watcher for controller kube-system/coredns-64897985d-65tmx, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-tdh2r, container calico-node
STEP: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-vbpda4-control-plane-8frgf, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-scheduler-machine-pool-vbpda4-control-plane-8frgf
STEP: Collecting events for Pod kube-system/coredns-64897985d-v9ls5
STEP: Error starting logs stream for pod kube-system/kube-proxy-jrwdb, container kube-proxy: pods "machine-pool-vbpda4-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-s2rtn, container kube-proxy: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-cr96t, container calico-node-startup: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-tdh2r, container calico-node: pods "machine-pool-vbpda4-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-cr96t, container calico-node-felix: pods "win-p-win000002" not found
STEP: Fetching activity logs took 1.937821084s
STEP: Dumping all the Cluster API resources in the "machine-pool-i1x5pl" namespace
STEP: Deleting cluster machine-pool-i1x5pl/machine-pool-vbpda4
STEP: Deleting cluster machine-pool-vbpda4
INFO: Waiting for the Cluster machine-pool-i1x5pl/machine-pool-vbpda4 to be deleted
STEP: Waiting for cluster machine-pool-vbpda4 to be deleted
... skipping 72 lines ...

Sep 24 20:55:12.500: INFO: Collecting logs for Windows node quick-sta-klrnx in cluster quick-start-3keqqb in namespace quick-start-wbj5wk

Sep 24 20:57:58.914: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-klrnx to /logs/artifacts/clusters/quick-start-3keqqb/machines/quick-start-3keqqb-md-win-7967c496b9-9n6hv/crashdumps.tar
Sep 24 20:58:01.320: INFO: Collecting boot logs for AzureMachine quick-start-3keqqb-md-win-klrnx

Failed to get logs for machine quick-start-3keqqb-md-win-7967c496b9-9n6hv, cluster quick-start-wbj5wk/quick-start-3keqqb: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Sep 24 20:58:02.253: INFO: Collecting logs for Windows node quick-sta-grxpd in cluster quick-start-3keqqb in namespace quick-start-wbj5wk

Sep 24 21:00:48.102: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-grxpd to /logs/artifacts/clusters/quick-start-3keqqb/machines/quick-start-3keqqb-md-win-7967c496b9-shn52/crashdumps.tar
Sep 24 21:00:50.484: INFO: Collecting boot logs for AzureMachine quick-start-3keqqb-md-win-grxpd

Failed to get logs for machine quick-start-3keqqb-md-win-7967c496b9-shn52, cluster quick-start-wbj5wk/quick-start-3keqqb: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster quick-start-wbj5wk/quick-start-3keqqb kube-system pod logs
STEP: Fetching kube-system pod logs took 634.731881ms
STEP: Dumping workload cluster quick-start-wbj5wk/quick-start-3keqqb Azure activity log
STEP: Creating log watcher for controller kube-system/calico-node-windows-zvwg9, container calico-node-startup
STEP: Collecting events for Pod kube-system/calico-node-windows-zvwg9
STEP: Creating log watcher for controller kube-system/coredns-64897985d-xljmg, container coredns
... skipping 121 lines ...

Sep 24 20:58:39.351: INFO: Collecting logs for Windows node md-scale-tllw2 in cluster md-scale-jgfrja in namespace md-scale-m4ttwa

Sep 24 21:01:22.894: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-tllw2 to /logs/artifacts/clusters/md-scale-jgfrja/machines/md-scale-jgfrja-md-win-756b54d4b6-87k59/crashdumps.tar
Sep 24 21:01:25.366: INFO: Collecting boot logs for AzureMachine md-scale-jgfrja-md-win-tllw2

Failed to get logs for machine md-scale-jgfrja-md-win-756b54d4b6-87k59, cluster md-scale-m4ttwa/md-scale-jgfrja: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Sep 24 21:01:26.221: INFO: Collecting logs for Windows node md-scale-kv9r2 in cluster md-scale-jgfrja in namespace md-scale-m4ttwa

Sep 24 21:04:11.760: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-kv9r2 to /logs/artifacts/clusters/md-scale-jgfrja/machines/md-scale-jgfrja-md-win-756b54d4b6-lb6ks/crashdumps.tar
Sep 24 21:04:14.235: INFO: Collecting boot logs for AzureMachine md-scale-jgfrja-md-win-kv9r2

Failed to get logs for machine md-scale-jgfrja-md-win-756b54d4b6-lb6ks, cluster md-scale-m4ttwa/md-scale-jgfrja: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster md-scale-m4ttwa/md-scale-jgfrja kube-system pod logs
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-gvkgm
STEP: Collecting events for Pod kube-system/coredns-64897985d-r6svw
STEP: Creating log watcher for controller kube-system/calico-node-windows-p2khr, container calico-node-felix
STEP: Collecting events for Pod kube-system/csi-proxy-sck9r
STEP: Creating log watcher for controller kube-system/calico-node-llb6k, container calico-node
... skipping 110 lines ...
STEP: Dumping logs from the "node-drain-fzxvs3" workload cluster
STEP: Dumping workload cluster node-drain-wpycfw/node-drain-fzxvs3 logs
Sep 24 21:00:35.562: INFO: Collecting logs for Linux node node-drain-fzxvs3-control-plane-rbfln in cluster node-drain-fzxvs3 in namespace node-drain-wpycfw

Sep 24 21:07:10.075: INFO: Collecting boot logs for AzureMachine node-drain-fzxvs3-control-plane-rbfln

Failed to get logs for machine node-drain-fzxvs3-control-plane-dbdnn, cluster node-drain-wpycfw/node-drain-fzxvs3: dialing public load balancer at node-drain-fzxvs3-69673d97.westus2.cloudapp.azure.com: dial tcp 20.64.194.146:22: connect: connection timed out
STEP: Dumping workload cluster node-drain-wpycfw/node-drain-fzxvs3 kube-system pod logs
STEP: Fetching kube-system pod logs took 651.607373ms
STEP: Creating log watcher for controller kube-system/etcd-node-drain-fzxvs3-control-plane-rbfln, container etcd
STEP: Collecting events for Pod kube-system/kube-apiserver-node-drain-fzxvs3-control-plane-rbfln
STEP: Collecting events for Pod kube-system/calico-node-5hdrd
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-fnplr
... skipping 173 lines ...
  Should pivot the bootstrap cluster to a self-hosted cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107

INFO: "Should pivot the bootstrap cluster to a self-hosted cluster" started at Sat, 24 Sep 2022 20:45:12 UTC on Ginkgo node 1 of 10
STEP: Creating namespace "self-hosted" for hosting the cluster
Sep 24 20:45:12.280: INFO: starting to create namespace for hosting the "self-hosted" test spec
2022/09/24 20:45:12 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-ff75yz" using the "management" template (Kubernetes v1.23.12, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-ff75yz --infrastructure (default) --kubernetes-version v1.23.12 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 79 lines ...
Sep 24 20:56:39.099: INFO: Dumping all the Cluster API resources in the "self-hosted" namespace
Sep 24 20:56:39.454: INFO: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-ff75yz
INFO: Waiting for the Cluster self-hosted/self-hosted-ff75yz to be deleted
STEP: Waiting for cluster self-hosted-ff75yz to be deleted
STEP: Redacting sensitive information from logs
Sep 24 21:27:28.386: INFO: FAILED!
Sep 24 21:27:28.386: INFO: Cleaning up after "Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster" spec
STEP: Redacting sensitive information from logs
INFO: "Should pivot the bootstrap cluster to a self-hosted cluster" ran for 43m6s on Ginkgo node 1 of 10


• Failure [2586.511 seconds]
Running the Cluster API E2E tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:45
  Running the self-hosted spec
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:134
    Should pivot the bootstrap cluster to a self-hosted cluster [It]
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107

    Failed to run clusterctl move
    Expected success, but got an error:
        <errors.aggregate | len:3, cap:4>: [
            <*errors.withStack | 0xc000d01590>{
                error: <*errors.withMessage | 0xc0021792c0>{
                    cause: <*errors.withStack | 0xc000d01560>{
                        error: <*errors.withMessage | 0xc0021792a0>{
                            cause: <*errors.StatusError | 0xc0007d06e0>{
                                ErrStatus: {
                                    TypeMeta: {Kind: "Status", APIVersion: "v1"},
                                    ListMeta: {
                                        SelfLink: "",
                                        ResourceVersion: "",
                                        Continue: "",
                                        RemainingItemCount: nil,
                                    },
                                    Status: "Failure",
                                    Message: "Internal error occurred: failed calling webhook \"default.azuremachinetemplate.infrastructure.cluster.x-k8s.io\": failed to call webhook: the server could not find the requested resource",
                                    Reason: "InternalError",
                                    Details: {
                                        Name: "",
                                        Group: "",
                                        Kind: "",
                                        UID: "",
... skipping 2 lines ...
                                        ],
                                        RetryAfterSeconds: 0,
                                    },
                                    Code: 500,
                                },
                            },
                            msg: "error creating \"infrastructure.cluster.x-k8s.io/v1beta1, Kind=AzureMachineTemplate\" self-hosted/self-hosted-ff75yz-control-plane",
                        },
                        stack: [0x2c3792e, 0x2c35cec, 0x2c27071, 0x20a287b, 0x20a2957, 0x20a28d9, 0x20a321f, 0x2c26f45, 0x2c35a6b, 0x2c329fc, 0x2c30905, 0x2c7c58f, 0x2c842b9, 0x3168548, 0x17296f1, 0x17290e5, 0x172817b, 0x172e9aa, 0x172e3a7, 0x173ab68, 0x173a885, 0x1739f25, 0x173c572, 0x1749a49, 0x1749856, 0x3188498, 0x1407382, 0x1342321],
                    },
                    msg: "action failed after 10 attempts",
                },
                stack: [0x2c26fa5, 0x2c35a6b, 0x2c329fc, 0x2c30905, 0x2c7c58f, 0x2c842b9, 0x3168548, 0x17296f1, 0x17290e5, 0x172817b, 0x172e9aa, 0x172e3a7, 0x173ab68, 0x173a885, 0x1739f25, 0x173c572, 0x1749a49, 0x1749856, 0x3188498, 0x1407382, 0x1342321],
            },
            <*errors.withStack | 0xc002940480>{
                error: <*errors.withMessage | 0xc0002f3160>{
                    cause: <*errors.withStack | 0xc002940450>{
                        error: <*errors.withMessage | 0xc0002f3140>{
                            cause: <*errors.StatusError | 0xc001c8f360>{
                                ErrStatus: {
                                    TypeMeta: {Kind: "Status", APIVersion: "v1"},
                                    ListMeta: {
                                        SelfLink: "",
                                        ResourceVersion: "",
                                        Continue: "",
                                        RemainingItemCount: nil,
                                    },
                                    Status: "Failure",
                                    Message: "Internal error occurred: failed calling webhook \"default.azurecluster.infrastructure.cluster.x-k8s.io\": failed to call webhook: the server could not find the requested resource",
                                    Reason: "InternalError",
                                    Details: {
                                        Name: "",
                                        Group: "",
                                        Kind: "",
                                        UID: "",
... skipping 5 lines ...
    Gomega truncated this representation as it exceeds 'format.MaxLength'.
    Consider having the object provide a custom 'GomegaStringer' representation
    or adjust the parameters in Gomega's 'format' package.
    
    Learn more here: https://onsi.github.io/gomega/#adjusting-output
    
        [action failed after 10 attempts: error creating "infrastructure.cluster.x-k8s.io/v1beta1, Kind=AzureMachineTemplate" self-hosted/self-hosted-ff75yz-control-plane: Internal error occurred: failed calling webhook "default.azuremachinetemplate.infrastructure.cluster.x-k8s.io": failed to call webhook: the server could not find the requested resource, action failed after 10 attempts: error creating "infrastructure.cluster.x-k8s.io/v1beta1, Kind=AzureCluster" self-hosted/self-hosted-ff75yz: Internal error occurred: failed calling webhook "default.azurecluster.infrastructure.cluster.x-k8s.io": failed to call webhook: the server could not find the requested resource, action failed after 10 attempts: error creating "infrastructure.cluster.x-k8s.io/v1beta1, Kind=AzureMachineTemplate" self-hosted/self-hosted-ff75yz-md-0: Internal error occurred: failed calling webhook "default.azuremachinetemplate.infrastructure.cluster.x-k8s.io": failed to call webhook: the server could not find the requested resource]

    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.1/framework/clusterctl/client.go:322

    Full Stack Trace
    sigs.k8s.io/cluster-api/test/framework/clusterctl.Move({0x3d544a0?, 0xc0000620b0?}, {{0xc00285d920, 0x22}, {0xc001dc6010, 0x31}, {0xc001dc86ea, 0x16}, {0xc001999d80, 0x1d}, ...})
    	/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.1/framework/clusterctl/client.go:322 +0x4e8
... skipping 25 lines ...
    	/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:57 +0x198
    testing.tRunner(0xc0000ddba0, 0x3a51a98)
    	/usr/local/go/src/testing/testing.go:1439 +0x102
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1486 +0x35f
------------------------------
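The aggregate error above shows clusterctl move failing because calls to the CAPZ defaulting webhooks (default.azuremachinetemplate.infrastructure.cluster.x-k8s.io and default.azurecluster.infrastructure.cluster.x-k8s.io) returned "the server could not find the requested resource" on the self-hosted target cluster. A minimal triage sketch, assuming kubectl access to the self-hosted cluster and that the provider components were installed into the usual capz-system namespace (the kubeconfig path below is illustrative, not taken from this run):

    # Point kubectl at the self-hosted cluster; this path is hypothetical.
    export KUBECONFIG=/tmp/self-hosted-ff75yz.kubeconfig
    # Are the CAPZ webhook configurations and CRDs present on the target cluster?
    kubectl get mutatingwebhookconfigurations | grep infrastructure.cluster.x-k8s.io
    kubectl get validatingwebhookconfigurations | grep infrastructure.cluster.x-k8s.io
    kubectl get crds | grep infrastructure.cluster.x-k8s.io
    # Is the webhook server running, and is its Service present for the API server to call?
    kubectl -n capz-system get pods,svc

If the webhook configurations exist but the serving pod or Service is missing or not ready, the API server cannot complete the webhook call, and clusterctl move retries until it gives up ("action failed after 10 attempts"), which matches the failure recorded above.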
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:165","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 4h0m0s timeout","severity":"error","time":"2022-09-25T00:36:10Z"}
++ early_exit_handler
++ '[' -n 156 ']'
++ kill -TERM 156
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 12 lines ...
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
All sensitive variables are redacted
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:255","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2022-09-25T00:51:10Z"}
{"component":"entrypoint","error":"os: process already finished","file":"k8s.io/test-infra/prow/entrypoint/run.go:257","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2022-09-25T00:51:10Z"}