Result   | FAILURE
Tests    | 0 failed / 0 succeeded
Started  |
Elapsed  | 2h15m
Revision | release-1.5
... skipping 186 lines ...
#18 exporting layers 0.4s done
#18 writing image sha256:b79351e55ec52d7ecbbfdd474e55aa434dc5dfe44687a82f37af1472803094e7 done
#18 naming to gcr.io/k8s-staging-cluster-api/capv-manager:e2e done
#18 DONE 0.4s
#11 [stage-1 1/3] FROM gcr.io/distroless/static:nonroot@sha256:26d07ba1f954c02943786e352bc2c8f4eac719ae2f76a0ced68a953bed93a779
WARNING: failed to get git remote url: fatal: No remote configured to list refs from.
Activated service account credentials for: [capv-prow@cluster-api-provider-vsphere.iam.gserviceaccount.com]
Copying file:///logs/artifacts/tempContainers/image.tar [Content-Type=application/x-tar]...
Operation completed over 1 objects/73.8 MiB.
make -C /home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/hack/tools ginkgo
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/hack/tools'
... skipping 123 lines ...
#18 exporting to image
#18 exporting layers done
#18 writing image sha256:b79351e55ec52d7ecbbfdd474e55aa434dc5dfe44687a82f37af1472803094e7
#18 writing image sha256:b79351e55ec52d7ecbbfdd474e55aa434dc5dfe44687a82f37af1472803094e7 done
#18 naming to gcr.io/k8s-staging-cluster-api/capv-manager:e2e done
#18 DONE 0.0s
WARNING: failed to get git remote url: fatal: No remote configured to list refs from.
make release-manifests
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
make manifests STAGE=release MANIFEST_DIR=out PULL_POLICY=IfNotPresent IMAGE=gcr.io/cluster-api-provider-vsphere/release/manager:v1.6.0
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
make generate-flavors FLAVOR_DIR=out
make[3]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere'
... skipping 245 lines ...
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by node-drain-14idd3/node-drain-fppiqo to be provisioned
STEP: Waiting for one control plane node to exist
STEP: Dumping logs from the "node-drain-fppiqo" workload cluster
Failed to get logs for machine node-drain-fppiqo-md-0-84fccc67b5-dsbdq, cluster node-drain-14idd3/node-drain-fppiqo: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for machine node-drain-fppiqo-t2gbq, cluster node-drain-14idd3/node-drain-fppiqo: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "node-drain-14idd3" namespace
STEP: Deleting cluster node-drain-14idd3/node-drain-fppiqo
STEP: Deleting cluster node-drain-fppiqo
INFO: Waiting for the Cluster node-drain-14idd3/node-drain-fppiqo to be deleted
STEP: Waiting for cluster node-drain-fppiqo to be deleted
STEP: Deleting namespace used for hosting the "node-drain" test spec
... skipping 64 lines ...
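The node-drain log collection above fails with "dialing host IP address at : dial tcp :22: connect: connection refused": the Machine reported no IP address, so the collector dialed an empty host. A minimal Go sketch (hypothetical dialNode helper, not the framework's code) showing why an empty host yields exactly that error:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// dialNode stands in for the collector's SSH dial. With ip == "",
// net.JoinHostPort yields ":22", and net.Dial treats the empty host as
// the local system, so nothing listening locally on port 22 produces
// "dial tcp :22: connect: connection refused".
func dialNode(ip string) error {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, "22"), 5*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	fmt.Println(dialNode("")) // reproduces the empty-address error above
}
```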
Discovering machine health check resources
Ensuring there is at least 1 Machine that MachineHealthCheck is matching
Patching MachineHealthCheck unhealthy condition to one of the nodes
INFO: Patching the node condition to the node
Waiting for remediation
Waiting until the node with unhealthy node condition is remediated
E0125 06:38:14.309462 26524 request.go:977] Unexpected error when reading response body: read tcp 10.8.0.4:34566->192.168.6.164:6443: read: connection reset by peer
STEP: PASSED!
STEP: Dumping logs from the "mhc-remediation-rdi8oo" workload cluster
Failed to get logs for machine mhc-remediation-rdi8oo-jd98z, cluster mhc-remediation-vo9kte/mhc-remediation-rdi8oo: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine mhc-remediation-rdi8oo-md-0-66f6d9f9f4-cz4ww, cluster mhc-remediation-vo9kte/mhc-remediation-rdi8oo: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "mhc-remediation-vo9kte" namespace
STEP: Deleting cluster mhc-remediation-vo9kte/mhc-remediation-rdi8oo
STEP: Deleting cluster mhc-remediation-rdi8oo
INFO: Waiting for the Cluster mhc-remediation-vo9kte/mhc-remediation-rdi8oo to be deleted
STEP: Waiting for cluster mhc-remediation-rdi8oo to be deleted
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
... skipping 45 lines ...
INFO: Waiting for the first control plane machine managed by mhc-remediation-ljyyl0/mhc-remediation-p619cs to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for the remaining control plane machines managed by mhc-remediation-ljyyl0/mhc-remediation-p619cs to be provisioned
STEP: Waiting for all control plane nodes to exist
STEP: Dumping logs from the "mhc-remediation-p619cs" workload cluster
Failed to get logs for machine mhc-remediation-p619cs-47rtw, cluster mhc-remediation-ljyyl0/mhc-remediation-p619cs: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine mhc-remediation-p619cs-md-0-6c99bfd976-7n4dm, cluster mhc-remediation-ljyyl0/mhc-remediation-p619cs: dialing host IP address at : dial tcp :22: connect: connection refused
STEP: Dumping all the Cluster API resources in the "mhc-remediation-ljyyl0" namespace
STEP: Deleting cluster mhc-remediation-ljyyl0/mhc-remediation-p619cs
STEP: Deleting cluster mhc-remediation-p619cs
INFO: Waiting for the Cluster mhc-remediation-ljyyl0/mhc-remediation-p619cs to be deleted
STEP: Waiting for cluster mhc-remediation-p619cs to be deleted
STEP: Deleting namespace used for hosting the "mhc-remediation" test spec
... skipping 58 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by quick-start-wwfrrd-md-0-4zqg5 are in the "<None>" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-wwfrrd" workload cluster
Failed to get logs for machine quick-start-wwfrrd-7j5n8-kt9sg, cluster quick-start-uu4j8f/quick-start-wwfrrd: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine quick-start-wwfrrd-md-0-4zqg5-76798969b6-fnjrm, cluster quick-start-uu4j8f/quick-start-wwfrrd: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "quick-start-uu4j8f" namespace
STEP: Deleting cluster quick-start-uu4j8f/quick-start-wwfrrd
STEP: Deleting cluster quick-start-wwfrrd
INFO: Waiting for the Cluster quick-start-uu4j8f/quick-start-wwfrrd to be deleted
STEP: Waiting for cluster quick-start-wwfrrd to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 50 lines ...
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
STEP: Checking all the machines controlled by quick-start-22n85y-md-0 are in the "<None>" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-22n85y" workload cluster
Failed to get logs for machine quick-start-22n85y-8pqxk, cluster quick-start-celhji/quick-start-22n85y: dialing host IP address at 192.168.6.31: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Failed to get logs for machine quick-start-22n85y-md-0-7dc5647cbb-m4b7v, cluster quick-start-celhji/quick-start-22n85y: dialing host IP address at 192.168.6.133: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
STEP: Dumping all the Cluster API resources in the "quick-start-celhji" namespace
STEP: Deleting cluster quick-start-celhji/quick-start-22n85y
STEP: Deleting cluster quick-start-22n85y
INFO: Waiting for the Cluster quick-start-celhji/quick-start-22n85y to be deleted
STEP: Waiting for cluster quick-start-22n85y to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 114 lines ...
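Unlike the empty-address dials, the quick-start-22n85y failures reach their hosts (192.168.6.31, 192.168.6.133) but fail SSH authentication: "attempted methods [none publickey]" means the client offered a key and the server accepted neither method. A rough sketch of this style of log capture with golang.org/x/crypto/ssh; the user name, key path, and capture helper are assumptions, not the framework's actual code:

```go
package main

import (
	"fmt"
	"net"
	"os"

	"golang.org/x/crypto/ssh"
)

// capture dials a node and reads a log file over SSH, roughly mirroring
// the `cat /var/log/cloud-init-output.log` collection step above.
func capture(ip, user, keyPath string) ([]byte, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Throwaway e2e clusters typically skip host key verification.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	// A key the server does not know yields: "ssh: handshake failed:
	// ssh: unable to authenticate, attempted methods [none publickey], ..."
	client, err := ssh.Dial("tcp", net.JoinHostPort(ip, "22"), cfg)
	if err != nil {
		return nil, err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return nil, err
	}
	defer sess.Close()
	return sess.CombinedOutput("cat /var/log/cloud-init-output.log")
}

func main() {
	out, err := capture("192.168.6.31", "capv", "/path/to/key") // assumed values
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	os.Stdout.Write(out)
}
```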
STEP: Checking all the machines controlled by md-scale-idc5tk-md-0 are in the "<None>" failure domain
INFO: Waiting for the machine pools to be provisioned
STEP: Scaling the MachineDeployment out to 3
INFO: Scaling machine deployment md-scale-i5cohs/md-scale-idc5tk-md-0 from 1 to 3 replicas
INFO: Waiting for correct number of replicas to exist
STEP: Dumping logs from the "md-scale-idc5tk" workload cluster
Failed to get logs for machine md-scale-idc5tk-jc5mn, cluster md-scale-i5cohs/md-scale-idc5tk: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine md-scale-idc5tk-md-0-6d49d967c8-gk5j9, cluster md-scale-i5cohs/md-scale-idc5tk: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for machine md-scale-idc5tk-md-0-6d49d967c8-hkhn2, cluster md-scale-i5cohs/md-scale-idc5tk: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for machine md-scale-idc5tk-md-0-6d49d967c8-kppwt, cluster md-scale-i5cohs/md-scale-idc5tk: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "md-scale-i5cohs" namespace
STEP: Deleting cluster md-scale-i5cohs/md-scale-idc5tk
STEP: Deleting cluster md-scale-idc5tk
INFO: Waiting for the Cluster md-scale-i5cohs/md-scale-idc5tk to be deleted
STEP: Waiting for cluster md-scale-idc5tk to be deleted
STEP: Deleting namespace used for hosting the "md-scale" test spec
... skipping 3 lines ...
When testing MachineDeployment scale out/in
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/md_scale_test.go:24
  Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count [It]
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.2/e2e/md_scale.go:71

  Timed out after 600.000s.
  Error: Unexpected non-nil/non-zero argument at index 1:
      <*errors.fundamental>: Machine count does not match existing nodes count
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.2/framework/machinedeployment_helpers.go:420
------------------------------
Cluster creation with storage policy
should create a cluster successfully
... skipping 107 lines ...
INFO: Waiting for rolling upgrade to start.
INFO: Waiting for MachineDeployment rolling upgrade to start
INFO: Waiting for rolling upgrade to complete.
INFO: Waiting for MachineDeployment rolling upgrade to complete
STEP: PASSED!
STEP: Dumping logs from the "md-rollout-lrzfzi" workload cluster
Failed to get logs for machine md-rollout-lrzfzi-fjtcd, cluster md-rollout-1kj6yx/md-rollout-lrzfzi: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
Failed to get logs for machine md-rollout-lrzfzi-md-0-68487dbbcd-5wnxh, cluster md-rollout-1kj6yx/md-rollout-lrzfzi: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "md-rollout-1kj6yx" namespace
STEP: Deleting cluster md-rollout-1kj6yx/md-rollout-lrzfzi
STEP: Deleting cluster md-rollout-lrzfzi
INFO: Waiting for the Cluster md-rollout-1kj6yx/md-rollout-lrzfzi to be deleted
STEP: Waiting for cluster md-rollout-lrzfzi to be deleted
STEP: Deleting namespace used for hosting the "md-rollout" test spec
... skipping 62 lines ...
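The md-scale failure above is a 600-second Gomega timeout raised from machinedeployment_helpers.go:420: after scaling to 3 replicas, the Machine count never converged with the workload cluster's node count. A sketch of the general polling shape behind such a timeout; the helper name and exact comparison are assumptions, not the framework's real implementation:

```go
package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	corev1 "k8s.io/api/core/v1"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForMachineCountToMatchNodes is a hypothetical stand-in for the
// framework poll: list Machines on the management cluster and Nodes on the
// workload cluster, failing the spec if the counts never match in time.
func waitForMachineCountToMatchNodes(ctx context.Context, mgmt, workload client.Client, namespace string) {
	Eventually(func(g Gomega) {
		machines := &clusterv1.MachineList{}
		g.Expect(mgmt.List(ctx, machines, client.InNamespace(namespace))).To(Succeed())
		nodes := &corev1.NodeList{}
		g.Expect(workload.List(ctx, nodes)).To(Succeed())
		// When a scaled-out Machine never becomes a Node, this is the
		// assertion that eventually reports the 600s timeout.
		g.Expect(len(machines.Items)).To(Equal(len(nodes.Items)),
			"Machine count does not match existing nodes count")
	}, 10*time.Minute, 10*time.Second).Should(Succeed())
}
```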
INFO: Waiting for MachineDeployment rollout for MachineDeploymentTopology "md-0" (class "quick-start-worker") to complete.
STEP: Rebasing the Cluster to a ClusterClass with a modified label for MachineDeployments and wait for changes to be applied to the MachineDeployment objects
INFO: Waiting for MachineDeployment rollout to complete.
INFO: Waiting for MachineDeployment rollout for MachineDeploymentTopology "md-0" (class "quick-start-worker") to complete.
STEP: PASSED!
STEP: Dumping logs from the "clusterclass-changes-s4gtcf" workload cluster
Failed to get logs for machine clusterclass-changes-s4gtcf-md-0-wmbl4-55c9cfb454-n5zb9, cluster clusterclass-changes-8f6ima/clusterclass-changes-s4gtcf: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for machine clusterclass-changes-s4gtcf-md-0-wmbl4-6cb987697-77qhm, cluster clusterclass-changes-8f6ima/clusterclass-changes-s4gtcf: dialing host IP address at : dial tcp :22: connect: connection refused
Failed to get logs for machine clusterclass-changes-s4gtcf-zw4pk-fxh4z, cluster clusterclass-changes-8f6ima/clusterclass-changes-s4gtcf: running command "cat /var/log/cloud-init-output.log": Process exited with status 1
STEP: Dumping all the Cluster API resources in the "clusterclass-changes-8f6ima" namespace
STEP: Deleting cluster clusterclass-changes-8f6ima/clusterclass-changes-s4gtcf
STEP: Deleting cluster clusterclass-changes-s4gtcf
INFO: Waiting for the Cluster clusterclass-changes-8f6ima/clusterclass-changes-s4gtcf to be deleted
STEP: Waiting for cluster clusterclass-changes-s4gtcf to be deleted
STEP: Deleting namespace used for hosting the "clusterclass-changes" test spec
... skipping 110 lines ...
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane quick-start-i2s0n4/quick-start-rq9jc0 to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
STEP: Checking all the control plane machines are in the expected failure domains
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2023-01-25T08:14:55Z"}
++ early_exit_handler
++ '[' -n 158 ']'
++ kill -TERM 158
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 24 lines ...
Cluster Creation using Cluster API quick-start test [PR-Blocking]
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/capv_quick_start_test.go:26
  Should create a workload cluster [It]
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.2/e2e/quick_start.go:78

  Timed out after 600.000s.
  Error: Unexpected non-nil/non-zero argument at index 1:
      <*url.Error>: &url.Error{Op:"Get", URL:"https://127.0.0.1:36777/apis/cluster.x-k8s.io/v1beta1/namespaces/quick-start-i2s0n4/machinesets?labelSelector=cluster.x-k8s.io%2Fcluster-name%3Dquick-start-rq9jc0%2Ccluster.x-k8s.io%2Fdeployment-name%3Dquick-start-rq9jc0-md-0", Err:(*net.OpError)(0xc0011140a0)}
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.2/framework/machinedeployment_helpers.go:129
------------------------------
Hardware version upgrade
creates a cluster with VM hardware versions upgraded
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/hardware_upgrade_test.go:57
... skipping 2 lines ...
• Failure in Spec Setup (BeforeEach) [60.004 seconds]
Hardware version upgrade
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/hardware_upgrade_test.go:43
  creates a cluster with VM hardware versions upgraded [BeforeEach]
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/hardware_upgrade_test.go:57

  Failed to get controller-runtime client
  Unexpected error:
      <*url.Error | 0xc0012411d0>: {
          Op: "Get",
          URL: "https://127.0.0.1:36777/api?timeout=32s",
          Err: <*net.OpError | 0xc002bc5630>{
              Op: "dial",
              Net: "tcp",
              Source: nil,
... skipping 21 lines ...
• Failure in Spec Setup (BeforeEach) [60.007 seconds]
Label nodes with ESXi host info
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/node_labeling_test.go:38
  creates a workload cluster whose nodes have the ESXi host info [BeforeEach]
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-vsphere/test/e2e/node_labeling_test.go:52

  Failed to get controller-runtime client
  Unexpected error:
      <*url.Error | 0xc001b06b10>: {
          Op: "Get",
          URL: "https://127.0.0.1:36777/api?timeout=32s",
          Err: <*net.OpError | 0xc002b1f6d0>{
              Op: "dial",
              Net: "tcp",
              Source: nil,
... skipping 11 lines ...
      Get "https://127.0.0.1:36777/api?timeout=32s": dial tcp 127.0.0.1:36777: connect: connection refused
  occurred
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.2/framework/cluster_proxy.go:188
------------------------------
STEP: Cleaning up the vSphere session
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:254","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2023-01-25T08:29:55Z"}
{"component":"entrypoint","error":"os: process already finished","file":"k8s.io/test-infra/prow/entrypoint/run.go:256","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2023-01-25T08:29:55Z"}
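Everything from the quick-start timeout onward is one underlying failure: the management cluster's API server at https://127.0.0.1:36777 stopped answering, so each subsequent BeforeEach gets a *url.Error wrapping a *net.OpError with "connect: connection refused". A small illustrative snippet (not framework code) showing how that nested error unwraps:

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
	"syscall"
)

func main() {
	// 127.0.0.1:36777 mirrors the dead management-cluster endpoint above;
	// nothing is expected to be listening here.
	_, err := http.Get("https://127.0.0.1:36777/api?timeout=32s")
	// errors.Is walks *url.Error -> *net.OpError -> *os.SyscallError
	// down to syscall.ECONNREFUSED ("connect: connection refused").
	if errors.Is(err, syscall.ECONNREFUSED) {
		fmt.Println("API server unreachable:", err)
	} else {
		fmt.Println(err)
	}
}
```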