Error lines from build-log.txt
... skipping 183 lines ...
#18 exporting to image
#18 exporting layers
#18 exporting layers 0.6s done
#18 writing image sha256:5c6a25d9a06ffe55d3dd3c3ebe0d108414e1494ae311a30946c4463bad202eac done
#18 naming to gcr.io/k8s-staging-cluster-api/capv-manager:e2e done
#18 DONE 0.6s
WARNING: failed to get git remote url: fatal: No remote configured to list refs from.
Activated service account credentials for: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com]
Copying file:///logs/artifacts/tempContainers/image.tar [Content-Type=application/x-tar]...
- [1 files][ 74.6 MiB/ 74.6 MiB]
Operation completed over 1 objects/74.6 MiB.
make -C /home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/hack/tools ginkgo
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/hack/tools'
... skipping 129 lines ...
#18 exporting to image
#18 exporting layers done
#18 writing image sha256:5c6a25d9a06ffe55d3dd3c3ebe0d108414e1494ae311a30946c4463bad202eac done
#18 naming to gcr.io/k8s-staging-cluster-api/capv-manager:e2e done
#18 DONE 0.0s
WARNING: failed to get git remote url: fatal: No remote configured to list refs from.
make release-manifests
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
make manifests STAGE=release MANIFEST_DIR=out PULL_POLICY=IfNotPresent IMAGE=gcr.io/cluster-api-provider-vsphere/release/manager:v1.6.0
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
make generate-flavors FLAVOR_DIR=out
make[3]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
... skipping 312 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist @ 02/01/23 17:38:09.962
STEP: Checking all the machines controlled by quick-start-0rxo0g-md-0 are in the "<None>" failure domain @ 02/01/23 17:39:50.099
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED! @ 02/01/23 17:39:50.15
STEP: Dumping logs from the "quick-start-0rxo0g" workload cluster @ 02/01/23 17:39:50.151
Failed to get logs for Machine quick-start-0rxo0g-2t5gp, Cluster quick-start-6fywmx/quick-start-0rxo0g: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine quick-start-0rxo0g-md-0-85f75f577f-jjjxg, Cluster quick-start-6fywmx/quick-start-0rxo0g: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
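The collector behind these two failures fetches guest logs by running "cat /var/log/cloud-init-output.log" over SSH, so exit status 1 usually means the file is missing or unreadable on the guest rather than a connectivity problem. A minimal manual check, assuming the capv user baked into CAPV node images and with the node IP and key path as placeholders:

    # placeholders: <node-ip> and the key path are not taken from this log
    ssh -i /path/to/private-key capv@<node-ip> 'ls -l /var/log/cloud-init-output.log'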
STEP: Dumping all the Cluster API resources in the "quick-start-6fywmx" namespace @ 02/01/23 17:39:54.748
STEP: Deleting cluster quick-start-6fywmx/quick-start-0rxo0g @ 02/01/23 17:39:55.06
STEP: Deleting cluster quick-start-0rxo0g @ 02/01/23 17:39:55.08
INFO: Waiting for the Cluster quick-start-6fywmx/quick-start-0rxo0g to be deleted
STEP: Waiting for cluster quick-start-0rxo0g to be deleted @ 02/01/23 17:39:55.094
STEP: Deleting namespace used for hosting the "quick-start" test spec @ 02/01/23 17:40:25.118
... skipping 44 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist @ 02/01/23 17:42:56.588
STEP: Checking all the machines controlled by quick-start-2wp1m1-md-0 are in the "<None>" failure domain @ 02/01/23 17:43:56.678
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED! @ 02/01/23 17:43:56.723
STEP: Dumping logs from the "quick-start-2wp1m1" workload cluster @ 02/01/23 17:43:56.723
Failed to get logs for Machine quick-start-2wp1m1-md-0-6c886b687f-mv6w4, Cluster quick-start-7uydp1/quick-start-2wp1m1: dialing host IP address at 192.168.6.105: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for Machine quick-start-2wp1m1-mk27v, Cluster quick-start-7uydp1/quick-start-2wp1m1: dialing host IP address at 192.168.6.23: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
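Unlike the cloud-init failures above, "no supported methods remain" means the SSH handshake itself failed: the host offered publickey authentication but rejected the key the log collector presented. A verbose handshake shows what the host will accept (key path is a placeholder; the user is assumed to be capv as in CAPV node images):

    ssh -v -i /path/to/private-key capv@192.168.6.105 exit 2>&1 | grep -i 'authentications that can continue'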
STEP: Dumping all the Cluster API resources in the "quick-start-7uydp1" namespace @ 02/01/23 17:43:59.381
STEP: Deleting cluster quick-start-7uydp1/quick-start-2wp1m1 @ 02/01/23 17:43:59.722
STEP: Deleting cluster quick-start-2wp1m1 @ 02/01/23 17:43:59.744
INFO: Waiting for the Cluster quick-start-7uydp1/quick-start-2wp1m1 to be deleted
STEP: Waiting for cluster quick-start-2wp1m1 to be deleted @ 02/01/23 17:43:59.758
STEP: Deleting namespace used for hosting the "quick-start" test spec @ 02/01/23 17:44:29.78
... skipping 116 lines ...
INFO: Waiting for correct number of replicas to exist
STEP: Scaling the MachineDeployment down to 1 @ 02/01/23 17:58:23.994
INFO: Scaling machine deployment md-scale-0yr6ud/md-scale-tw9frv-md-0 from 3 to 1 replicas
INFO: Waiting for correct number of replicas to exist
STEP: PASSED! @ 02/01/23 17:58:34.123
STEP: Dumping logs from the "md-scale-tw9frv" workload cluster @ 02/01/23 17:58:34.124
Failed to get logs for Machine md-scale-tw9frv-md-0-555dc75c59-z2g55, Cluster md-scale-0yr6ud/md-scale-tw9frv: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine md-scale-tw9frv-mzm76, Cluster md-scale-0yr6ud/md-scale-tw9frv: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "md-scale-0yr6ud" namespace @ 02/01/23 17:58:38.671
STEP: Deleting cluster md-scale-0yr6ud/md-scale-tw9frv @ 02/01/23 17:58:39.003
STEP: Deleting cluster md-scale-tw9frv @ 02/01/23 17:58:39.026
INFO: Waiting for the Cluster md-scale-0yr6ud/md-scale-tw9frv to be deleted
STEP: Waiting for cluster md-scale-tw9frv to be deleted @ 02/01/23 17:58:39.042
STEP: Deleting namespace used for hosting the "md-scale" test spec @ 02/01/23 17:59:09.063
... skipping 57 lines ...
INFO: Waiting for MachineDeployment rollout for MachineDeploymentTopology "md-0" (class "quick-start-worker") to complete.
STEP: Deleting a MachineDeploymentTopology in the Cluster Topology and wait for associated MachineDeployment to be deleted @ 02/01/23 18:03:41.023
INFO: Removing MachineDeploymentTopology from the Cluster Topology.
INFO: Waiting for MachineDeployment to be deleted.
STEP: PASSED! @ 02/01/23 18:03:51.106
STEP: Dumping logs from the "clusterclass-changes-h1ntqx" workload cluster @ 02/01/23 18:03:51.107
Failed to get logs for Machine clusterclass-changes-h1ntqx-psnbg-bnkss, Cluster clusterclass-changes-xnmxvv/clusterclass-changes-h1ntqx: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "clusterclass-changes-xnmxvv" namespace @ 02/01/23 18:03:53.058
STEP: Deleting cluster clusterclass-changes-xnmxvv/clusterclass-changes-h1ntqx @ 02/01/23 18:03:53.358
STEP: Deleting cluster clusterclass-changes-h1ntqx @ 02/01/23 18:03:53.377
INFO: Waiting for the Cluster clusterclass-changes-xnmxvv/clusterclass-changes-h1ntqx to be deleted
STEP: Waiting for cluster clusterclass-changes-h1ntqx to be deleted @ 02/01/23 18:03:53.39
STEP: Deleting namespace used for hosting the "clusterclass-changes" test spec @ 02/01/23 18:04:13.408
... skipping 46 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist @ 02/01/23 18:07:14.909
STEP: Checking all the machines controlled by quick-start-w5glqh-md-0-x9fv8 are in the "<None>" failure domain @ 02/01/23 18:08:14.994
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED! @ 02/01/23 18:08:15.045
STEP: Dumping logs from the "quick-start-w5glqh" workload cluster @ 02/01/23 18:08:15.045
Failed to get logs for Machine quick-start-w5glqh-98kz9-npt4s, Cluster quick-start-6pafvp/quick-start-w5glqh: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine quick-start-w5glqh-md-0-x9fv8-8878579fd-s2pb2, Cluster quick-start-6pafvp/quick-start-w5glqh: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "quick-start-6pafvp" namespace @ 02/01/23 18:08:19.352
STEP: Deleting cluster quick-start-6pafvp/quick-start-w5glqh @ 02/01/23 18:08:19.675
STEP: Deleting cluster quick-start-w5glqh @ 02/01/23 18:08:19.696
INFO: Waiting for the Cluster quick-start-6pafvp/quick-start-w5glqh to be deleted
STEP: Waiting for cluster quick-start-w5glqh to be deleted @ 02/01/23 18:08:19.712
STEP: Deleting namespace used for hosting the "quick-start" test spec @ 02/01/23 18:08:49.739
... skipping 50 lines ...
INFO: Waiting for rolling upgrade to start.
INFO: Waiting for MachineDeployment rolling upgrade to start
INFO: Waiting for rolling upgrade to complete.
INFO: Waiting for MachineDeployment rolling upgrade to complete
STEP: PASSED! @ 02/01/23 18:15:51.439
STEP: Dumping logs from the "md-rollout-jmqcb3" workload cluster @ 02/01/23 18:15:51.44
Failed to get logs for Machine md-rollout-jmqcb3-4ggmb, Cluster md-rollout-1bn1n8/md-rollout-jmqcb3: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for Machine md-rollout-jmqcb3-md-0-564d64b9db-bb6dg, Cluster md-rollout-1bn1n8/md-rollout-jmqcb3: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "md-rollout-1bn1n8" namespace @ 02/01/23 18:15:55.856
STEP: Deleting cluster md-rollout-1bn1n8/md-rollout-jmqcb3 @ 02/01/23 18:15:56.171
STEP: Deleting cluster md-rollout-jmqcb3 @ 02/01/23 18:15:56.191
INFO: Waiting for the Cluster md-rollout-1bn1n8/md-rollout-jmqcb3 to be deleted
STEP: Waiting for cluster md-rollout-jmqcb3 to be deleted @ 02/01/23 18:15:56.206
STEP: Deleting namespace used for hosting the "md-rollout" test spec @ 02/01/23 18:16:26.231
... skipping 56 lines ...
STEP: Waiting for deployment node-drain-07jqiq-unevictable-workload/unevictable-pod-a12 to be available @ 02/01/23 18:24:03.431
STEP: Scale down the controlplane of the workload cluster and make sure that nodes running workload can be deleted even the draining process is blocked. @ 02/01/23 18:24:13.761
INFO: Scaling controlplane node-drain-07jqiq/node-drain-7etr9f from 3 to 1 replicas
INFO: Waiting for correct number of replicas to exist
STEP: PASSED! @ 02/01/23 18:28:14.407
STEP: Dumping logs from the "node-drain-7etr9f" workload cluster @ 02/01/23 18:28:14.407
Failed to get logs for Machine node-drain-7etr9f-7b7cj, Cluster node-drain-07jqiq/node-drain-7etr9f: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "node-drain-07jqiq" namespace @ 02/01/23 18:28:16.465
STEP: Deleting cluster node-drain-07jqiq/node-drain-7etr9f @ 02/01/23 18:28:16.772
STEP: Deleting cluster node-drain-7etr9f @ 02/01/23 18:28:16.79
INFO: Waiting for the Cluster node-drain-07jqiq/node-drain-7etr9f to be deleted
STEP: Waiting for cluster node-drain-7etr9f to be deleted @ 02/01/23 18:28:16.803
STEP: Deleting namespace used for hosting the "node-drain" test spec @ 02/01/23 18:28:46.832
... skipping 79 lines ...
SSSS
------------------------------
[SynchronizedAfterSuite]
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/e2e_suite_test.go:159
STEP: Cleaning up the vSphere session @ 02/01/23 18:28:47.421
STEP: Tearing down the management cluster @ 02/01/23 18:28:47.593
Error from server (Forbidden): error when creating "STDIN": configmaps "cpi-manifests" is forbidden: unable to create new content in namespace dhcp-overrides-cf8n7o because it is being terminated
Error from server (Forbidden): error when creating "STDIN": configmaps "csi.vsphere.vmware.com" is forbidden: unable to create new content in namespace dhcp-overrides-cf8n7o because it is being terminated
Error from server (Forbidden): error when creating "STDIN": configmaps "vsphere-csi-controller" is forbidden: unable to create new content in namespace dhcp-overrides-cf8n7o because it is being terminated
Error from server (Forbidden): error when creating "STDIN": configmaps "vsphere-csi-controller-binding" is forbidden: unable to create new content in namespace dhcp-overrides-cf8n7o because it is being terminated
Error from server (Forbidden): error when creating "STDIN": configmaps "vsphere-csi-controller-role" is forbidden: unable to create new content in namespace dhcp-overrides-cf8n7o because it is being terminated
Error from server (Forbidden): error when creating "STDIN": configmaps "vsphere-csi-node" is forbidden: unable to create new content in namespace dhcp-overrides-cf8n7o because it is being terminated
Error from server (Forbidden): error when creating "STDIN": configmaps "cni-dhcp-overrides-cjnnkh-crs-cni" is forbidden: unable to create new content in namespace dhcp-overrides-cf8n7o because it is being terminated
Error from server (Forbidden): error when creating "STDIN": secrets "dhcp-overrides-cjnnkh" is forbidden: unable to create new content in namespace dhcp-overrides-cf8n7o because it is being terminated
Error from server (Forbidden): error when creating "STDIN": secrets "cloud-controller-manager" is forbidden: unable to create new content in namespace dhcp-overrides-cf8n7o because it is being terminated
Error from server (Forbidden): error when creating "STDIN": secrets "cloud-provider-vsphere-credentials" is forbidden: unable to create new content in namespace dhcp-overrides-cf8n7o because it is being terminated
Error from server (Forbidden): error when creating "STDIN": secrets "csi-vsphere-config" is forbidden: unable to create new content in namespace dhcp-overrides-cf8n7o because it is being terminated
Error from server (Forbidden): error when creating "STDIN": secrets "vsphere-csi-controller" is forbidden: unable to create new content in namespace dhcp-overrides-cf8n7o because it is being terminated
Error from server (Forbidden): error when creating "STDIN": clusterresourcesets.addons.cluster.x-k8s.io "dhcp-overrides-cjnnkh-crs-0" is forbidden: unable to create new content in namespace dhcp-overrides-cf8n7o because it is being terminated
Error from server (Forbidden): error when creating "STDIN": clusterresourcesets.addons.cluster.x-k8s.io "dhcp-overrides-cjnnkh-crs-cni" is forbidden: unable to create new content in namespace dhcp-overrides-cf8n7o because it is being terminated
Error from server (Forbidden): error when creating "STDIN": kubeadmconfigtemplates.bootstrap.cluster.x-k8s.io "dhcp-overrides-cjnnkh-md-0" is forbidden: unable to create new content in namespace dhcp-overrides-cf8n7o because it is being terminated
Error from server (Forbidden): error when creating "STDIN": clusters.cluster.x-k8s.io "dhcp-overrides-cjnnkh" is forbidden: unable to create new content in namespace dhcp-overrides-cf8n7o because it is being terminated
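These Forbidden errors are expected once a namespace is in the Terminating phase: the API server refuses to create new objects in it until deletion completes, so a late kubectl apply into dhcp-overrides-cf8n7o can only fail this way. The phase can be confirmed with a standard kubectl query:

    kubectl get namespace dhcp-overrides-cf8n7o -o jsonpath='{.status.phase}'
    # prints "Terminating" while deletion is still in progress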
error when retrieving current configuration of:
Resource: "cluster.x-k8s.io/v1beta1, Resource=machinedeployments", GroupVersionKind: "cluster.x-k8s.io/v1beta1, Kind=MachineDeployment"
Name: "dhcp-overrides-cjnnkh-md-0", Namespace: "dhcp-overrides-cf8n7o"
from server for: "STDIN": Get "https://127.0.0.1:35481/apis/cluster.x-k8s.io/v1beta1/namespaces/dhcp-overrides-cf8n7o/machinedeployments/dhcp-overrides-cjnnkh-md-0": dial tcp 127.0.0.1:35481: connect: connection refused - error from a previous attempt: unexpected EOF
error when retrieving current configuration of:
Resource: "controlplane.cluster.x-k8s.io/v1beta1, Resource=kubeadmcontrolplanes", GroupVersionKind: "controlplane.cluster.x-k8s.io/v1beta1, Kind=KubeadmControlPlane"
Name: "dhcp-overrides-cjnnkh", Namespace: "dhcp-overrides-cf8n7o"
from server for: "STDIN": Get "https://127.0.0.1:35481/apis/controlplane.cluster.x-k8s.io/v1beta1/namespaces/dhcp-overrides-cf8n7o/kubeadmcontrolplanes/dhcp-overrides-cjnnkh": dial tcp 127.0.0.1:35481: connect: connection refused
error when retrieving current configuration of:
Resource: "infrastructure.cluster.x-k8s.io/v1beta1, Resource=vsphereclusters", GroupVersionKind: "infrastructure.cluster.x-k8s.io/v1beta1, Kind=VSphereCluster"
Name: "dhcp-overrides-cjnnkh", Namespace: "dhcp-overrides-cf8n7o"
from server for: "STDIN": Get "https://127.0.0.1:35481/apis/infrastructure.cluster.x-k8s.io/v1beta1/namespaces/dhcp-overrides-cf8n7o/vsphereclusters/dhcp-overrides-cjnnkh": dial tcp 127.0.0.1:35481: connect: connection refused
error when retrieving current configuration of:
Resource: "infrastructure.cluster.x-k8s.io/v1beta1, Resource=vspheremachinetemplates", GroupVersionKind: "infrastructure.cluster.x-k8s.io/v1beta1, Kind=VSphereMachineTemplate"
Name: "dhcp-overrides-cjnnkh", Namespace: "dhcp-overrides-cf8n7o"
from server for: "STDIN": Get "https://127.0.0.1:35481/apis/infrastructure.cluster.x-k8s.io/v1beta1/namespaces/dhcp-overrides-cf8n7o/vspheremachinetemplates/dhcp-overrides-cjnnkh": dial tcp 127.0.0.1:35481: connect: connection refused
error when retrieving current configuration of:
Resource: "infrastructure.cluster.x-k8s.io/v1beta1, Resource=vspheremachinetemplates", GroupVersionKind: "infrastructure.cluster.x-k8s.io/v1beta1, Kind=VSphereMachineTemplate"
Name: "dhcp-overrides-cjnnkh-worker", Namespace: "dhcp-overrides-cf8n7o"
from server for: "STDIN": Get "https://127.0.0.1:35481/apis/infrastructure.cluster.x-k8s.io/v1beta1/namespaces/dhcp-overrides-cf8n7o/vspheremachinetemplates/dhcp-overrides-cjnnkh-worker": dial tcp 127.0.0.1:35481: connect: connection refused
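The "connection refused" errors above come from the read that kubectl apply performs before patching each object; by this point the management cluster behind 127.0.0.1:35481 has presumably been torn down (see "Tearing down the management cluster" earlier), so nothing is listening. A quick probe that distinguishes a dead endpoint from an auth problem (port taken from the log; an unauthenticated request suffices for this check):

    curl -k https://127.0.0.1:35481/healthz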
[SynchronizedAfterSuite] PASSED [1.652 seconds]
------------------------------
Summarizing 1 Failure:
[TIMEDOUT] DHCPOverrides configuration test when Creating a cluster with DHCPOverrides configured [It] Only configures the network with the provided nameservers
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/dhcp_overrides_test.go:66
Ran 10 of 17 Specs in 3544.936 seconds
FAIL! - Suite Timeout Elapsed -- The connection to the server localhost:8080 was refused - did you specify the right host or port?
9 Passed | 1 Failed | 1 Pending | 6 Skipped
--- FAIL: TestE2E (3544.94s)
FAIL
Ginkgo ran 1 suite in 1h0m2.077111379s
Test Suite Failed
real 60m2.102s
user 5m57.735s
sys 1m15.748s
make: *** [Makefile:183: e2e] Error 1
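The root failure is the suite timeout: the run lasted 3544.94s and "Suite Timeout Elapsed" matches Ginkgo v2's --timeout budget, which defaults to 1h, so the DHCPOverrides spec was interrupted mid-test and the teardown errors above cascaded from there (the localhost:8080 refusal is kubectl's fallback address when no kubeconfig is available). If that spec is slow rather than hung, a larger budget could be passed to the ginkgo CLI, for example:

    # sketch of the underlying mechanism; the repo's Makefile e2e target would need to forward the flag
    ginkgo --timeout=2h ./test/e2e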
Releasing IP claims
ipclaim.ipam.metal3.io "ip-claim-fc9d672d51b088271c31e6fece6e03179f763cf4" deleted
ipclaim.ipam.metal3.io "workload-ip-claim-ffcd5c91f0f4c47a4b11830274343f978a06492f" deleted
vpn
WARNING: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com] appears to be a service account. Service account tokens cannot be revoked, but they will expire automatically. To prevent use of the service account token earlier than the expiration, delete or disable the parent service account. To explicitly delete the key associated with the service account use `gcloud iam service-accounts keys delete` instead.
Revoked credentials:
... skipping 13 lines ...