PR | jackfrancis: Update default k8s version to v1.25 for testing
Result | FAILURE
Tests | 2 failed / 21 succeeded
Started |
Elapsed | 1h55m
Revision | b566d645b4d61008a2395038f1c761b5c5835b42
Refs | 3088
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sRunning\sthe\sCluster\sAPI\sE2E\stests\sRunning\sKCP\supgrade\sin\sa\sHA\scluster\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\seventually\srun\skubetest$'
[FAILED] Timed out after 3600.000s.
Timed out waiting for all control-plane machines in Cluster k8s-upgrade-and-conformance-nsdl74/k8s-upgrade-and-conformance-2l4n9w to be upgraded to kubernetes version v1.26.1
The function passed to Eventually returned the following error:
    old nodes remain
    <*errors.fundamental | 0xc0023d95c0>: {
        msg: "old nodes remain",
        stack: [0x2b6487c, 0x154d085, 0x154c57c, 0x196b29a, 0x196c657, 0x196964d, 0x2b646ec, 0x2b5bf55, 0x3415208, 0x194637b, 0x195a958, 0x14da741],
    }
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machine_helpers.go:151 @ 01/30/23 20:20:09.447
There were additional failures detected after the initial failure. These are visible in the timeline.
(from junit.e2e_suite.1.xml)
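Triage note: the timeout above is a Gomega `Eventually` poll inside the cluster-api test framework (`framework/machine_helpers.go:151`), which keeps listing control-plane Machines until none report the old version. A minimal, self-contained sketch of that pattern, assuming a hypothetical `countOldVersionMachines` helper in place of the framework's real Machine listing (the timeout is shortened from 3600s for illustration):

```go
package main

import (
	"errors"
	"fmt"
	"time"

	. "github.com/onsi/gomega"
)

var polls int

// countOldVersionMachines is a hypothetical stand-in for the framework's
// Machine listing: it returns how many control-plane Machines still report
// a Kubernetes version older than the upgrade target. Here it simulates an
// upgrade that completes after a few polls; in the failed run above, the
// equivalent count never reached zero.
func countOldVersionMachines(target string) int {
	polls++
	if polls < 3 {
		return 1
	}
	return 0
}

func main() {
	// Report failures by printing instead of panicking, so the sketch can
	// run outside a Ginkgo suite.
	g := NewGomega(func(message string, _ ...int) { fmt.Println(message) })

	// Mirror of the failing check: poll until no Machines remain on the old
	// version, or fail with "old nodes remain" once the timeout elapses.
	g.Eventually(func() error {
		if countOldVersionMachines("v1.26.1") > 0 {
			return errors.New("old nodes remain")
		}
		return nil
	}, 10*time.Second, 1*time.Second).Should(Succeed())

	fmt.Println("all control-plane machines upgraded")
}
```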
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-2l4n9w-mp-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-2l4n9w-md-0 created
cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-2l4n9w created
machinedeployment.cluster.x-k8s.io/k8s-upgrade-and-conformance-2l4n9w-md-0 created
machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-2l4n9w-mp-0 created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/k8s-upgrade-and-conformance-2l4n9w-control-plane created
azurecluster.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-2l4n9w created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
azuremachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-2l4n9w-mp-0 created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-2l4n9w-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-2l4n9w-md-0 created
felixconfiguration.crd.projectcalico.org/default created

> Enter [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:52 @ 01/30/23 19:10:31.808
INFO: "" started at Mon, 30 Jan 2023 19:10:31 UTC on Ginkgo node 5 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
< Exit [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:52 @ 01/30/23 19:10:31.868 (60ms)
> Enter [BeforeEach] Running KCP upgrade in a HA cluster [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/cluster_upgrade.go:84 @ 01/30/23 19:10:31.868
STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/common.go:51 @ 01/30/23 19:10:31.868
INFO: Creating namespace k8s-upgrade-and-conformance-nsdl74
INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-nsdl74"
< Exit [BeforeEach] Running KCP upgrade in a HA cluster [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/cluster_upgrade.go:84 @ 01/30/23 19:10:31.899 (31ms)
> Enter [It] Should create and upgrade a workload cluster and eventually run kubetest - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/cluster_upgrade.go:118 @ 01/30/23 19:10:31.899
STEP: Creating a workload cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/cluster_upgrade.go:119 @ 01/30/23 19:10:31.9
INFO: Creating the workload cluster with name "k8s-upgrade-and-conformance-2l4n9w" using the "upgrades" template (Kubernetes v1.25.6, 3 control-plane machines, 0 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster k8s-upgrade-and-conformance-2l4n9w --infrastructure (default) --kubernetes-version v1.25.6 --control-plane-machine-count 3 --worker-machine-count 0 --flavor upgrades
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:134 @ 01/30/23 19:10:35.257
INFO: Waiting for control plane to be initialized
STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:51 @ 01/30/23 19:12:25.358
STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:112 @ 01/30/23 19:12:25.358
Jan 30 19:14:44.640: INFO: getting history for release projectcalico
Jan 30 19:14:44.675: INFO: Release projectcalico does not exist, installing it
Jan 30 19:14:45.460: INFO: creating 1 resource(s)
Jan 30 19:14:45.525: INFO: creating 1 resource(s)
Jan 30 19:14:45.576: INFO: creating 1 resource(s)
Jan 30 19:14:45.629: INFO: creating 1 resource(s)
Jan 30 19:14:45.682: INFO: creating 1 resource(s)
Jan 30 19:14:45.741: INFO: creating 1 resource(s)
Jan 30 19:14:45.856: INFO: creating 1 resource(s)
Jan 30 19:14:45.929: INFO: creating 1 resource(s)
Jan 30 19:14:45.976: INFO: creating 1 resource(s)
Jan 30 19:14:46.022: INFO: creating 1 resource(s)
Jan 30 19:14:46.072: INFO: creating 1 resource(s)
Jan 30 19:14:46.117: INFO: creating 1 resource(s)
Jan 30 19:14:46.176: INFO: creating 1 resource(s)
Jan 30 19:14:46.224: INFO: creating 1 resource(s)
Jan 30 19:14:46.276: INFO: creating 1 resource(s)
Jan 30 19:14:46.339: INFO: creating 1 resource(s)
Jan 30 19:14:46.408: INFO: creating 1 resource(s)
Jan 30 19:14:46.462: INFO: creating 1 resource(s)
Jan 30 19:14:46.529: INFO: creating 1 resource(s)
Jan 30 19:14:46.649: INFO: creating 1 resource(s)
Jan 30 19:14:46.960: INFO: creating 1 resource(s)
Jan 30 19:14:47.053: INFO: Clearing discovery cache
Jan 30 19:14:47.053: INFO: beginning wait for 21 resources with timeout of 1m0s
Jan 30 19:14:49.783: INFO: creating 1 resource(s)
Jan 30 19:14:50.196: INFO: creating 6 resource(s)
Jan 30 19:14:50.734: INFO: Install complete
STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:60 @ 01/30/23 19:14:50.982
STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/30/23 19:14:51.119
Jan 30 19:14:51.119: INFO: starting to wait for deployment to become available
Jan 30 19:15:01.188: INFO: Deployment tigera-operator/tigera-operator is now available, took 10.068698558s
STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:74 @ 01/30/23 19:15:02.63
STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/30/23 19:15:42.938
Jan 30 19:15:42.938: INFO: starting to wait for deployment to become available
Jan 30 19:16:33.147: INFO: Deployment calico-system/calico-kube-controllers is now available, took 50.209927806s
STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/30/23 19:16:33.454
Jan 30 19:16:33.454: INFO: starting to wait for deployment to become available
Jan 30 19:16:33.493: INFO: Deployment calico-system/calico-typha is now available, took 39.68167ms
STEP: Waiting for Ready calico-apiserver deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:79 @ 01/30/23 19:16:33.493
STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/30/23 19:16:33.773
Jan 30 19:16:33.773: INFO: starting to wait for deployment to become available
Jan 30 19:16:53.884: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 20.110487245s
INFO: Waiting for the first control plane machine managed by k8s-upgrade-and-conformance-nsdl74/k8s-upgrade-and-conformance-2l4n9w-control-plane to be provisioned
STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:133 @ 01/30/23 19:16:53.917
STEP: Installing azure-disk CSI driver components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:65 @ 01/30/23 19:16:53.927
Jan 30 19:16:53.992: INFO: getting history for release azuredisk-csi-driver-oot
Jan 30 19:16:54.028: INFO: Release azuredisk-csi-driver-oot does not exist, installing it
Jan 30 19:16:56.789: INFO: creating 1 resource(s)
Jan 30 19:16:56.925: INFO: creating 18 resource(s)
Jan 30 19:16:57.300: INFO: Install complete
STEP: Waiting for Ready csi-azuredisk-controller deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:75 @ 01/30/23 19:16:57.301
STEP: waiting for deployment kube-system/csi-azuredisk-controller to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:92 @ 01/30/23 19:16:57.45
Jan 30 19:16:57.450: INFO: starting to wait for deployment to become available
Jan 30 19:18:08.935: INFO: Deployment kube-system/csi-azuredisk-controller is now available, took 1m11.484553958s
INFO: Waiting for control plane to be ready
INFO: Waiting for the remaining control plane machines managed by k8s-upgrade-and-conformance-nsdl74/k8s-upgrade-and-conformance-2l4n9w-control-plane to be provisioned
STEP: Waiting for all control plane nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:96 @ 01/30/23 19:18:08.965
INFO: Waiting for control plane k8s-upgrade-and-conformance-nsdl74/k8s-upgrade-and-conformance-2l4n9w-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:165 @ 01/30/23 19:20:09.14
STEP: Checking all the control plane machines are in the expected failure domains - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/controlplane_helpers.go:196 @ 01/30/23 19:20:09.15
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinedeployment_helpers.go:102 @ 01/30/23 19:20:09.197
STEP: Checking all the machines controlled by k8s-upgrade-and-conformance-2l4n9w-md-0 are in the "<None>" failure domain - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/30/23 19:20:09.216
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machinepool_helpers.go:79 @ 01/30/23 19:20:09.307
STEP: Upgrading the Kubernetes control-plane - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/cluster_upgrade.go:160 @ 01/30/23 19:20:09.315
INFO: Patching the new kubernetes version to KCP
INFO: Waiting for control-plane machines to have the upgraded kubernetes version
STEP: Ensuring all control-plane machines have upgraded kubernetes version v1.26.1 - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/30/23 19:20:09.446
[FAILED] Timed out after 3600.000s.
Timed out waiting for all control-plane machines in Cluster k8s-upgrade-and-conformance-nsdl74/k8s-upgrade-and-conformance-2l4n9w to be upgraded to kubernetes version v1.26.1
The function passed to Eventually returned the following error:
    old nodes remain
    <*errors.fundamental | 0xc0023d95c0>: {
        msg: "old nodes remain",
        stack: [0x2b6487c, 0x154d085, 0x154c57c, 0x196b29a, 0x196c657, 0x196964d, 0x2b646ec, 0x2b5bf55, 0x3415208, 0x194637b, 0x195a958, 0x14da741],
    }
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/machine_helpers.go:151 @ 01/30/23 20:20:09.447
< Exit [It] Should create and upgrade a workload cluster and eventually run kubetest - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/cluster_upgrade.go:118 @ 01/30/23 20:20:09.447 (1h9m37.547s)
> Enter [AfterEach] Running KCP upgrade in a HA cluster [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/cluster_upgrade.go:242 @ 01/30/23 20:20:09.447
STEP: Dumping logs from the "k8s-upgrade-and-conformance-2l4n9w" workload cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/common.go:51 @ 01/30/23 20:20:09.447
Jan 30 20:20:09.447: INFO: Dumping workload cluster k8s-upgrade-and-conformance-nsdl74/k8s-upgrade-and-conformance-2l4n9w logs
Jan 30 20:20:09.493: INFO: Collecting logs for Linux node k8s-upgrade-and-conformance-2l4n9w-control-plane-68t4w in cluster k8s-upgrade-and-conformance-2l4n9w in namespace k8s-upgrade-and-conformance-nsdl74
Jan 30 20:20:31.043: INFO: Collecting boot logs for AzureMachine k8s-upgrade-and-conformance-2l4n9w-control-plane-68t4w
Jan 30 20:20:32.358: INFO: Collecting logs for Linux node k8s-upgrade-and-conformance-2l4n9w-control-plane-bgqq5 in cluster k8s-upgrade-and-conformance-2l4n9w in namespace k8s-upgrade-and-conformance-nsdl74
Jan 30 20:20:51.240: INFO: Collecting boot logs for AzureMachine k8s-upgrade-and-conformance-2l4n9w-control-plane-bgqq5
Jan 30 20:20:51.737: INFO: Collecting logs for Linux node k8s-upgrade-and-conformance-2l4n9w-control-plane-8rgdq in cluster k8s-upgrade-and-conformance-2l4n9w in namespace k8s-upgrade-and-conformance-nsdl74
Jan 30 20:21:04.673: INFO: Collecting boot logs for AzureMachine k8s-upgrade-and-conformance-2l4n9w-control-plane-8rgdq
Jan 30 20:21:05.145: INFO: Collecting logs for Linux node k8s-upgrade-and-conformance-2l4n9w-control-plane-49xp4 in cluster k8s-upgrade-and-conformance-2l4n9w in namespace k8s-upgrade-and-conformance-nsdl74
Jan 30 20:21:17.924: INFO: Collecting boot logs for AzureMachine k8s-upgrade-and-conformance-2l4n9w-control-plane-49xp4
Jan 30 20:21:18.380: INFO: Dumping workload cluster k8s-upgrade-and-conformance-nsdl74/k8s-upgrade-and-conformance-2l4n9w kube-system pod logs
Jan 30 20:21:18.919: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-5bd679ffdb-54l76, container calico-apiserver
Jan 30 20:21:18.919: INFO: Describing Pod calico-apiserver/calico-apiserver-5bd679ffdb-54l76
Jan 30 20:21:19.005: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-5bd679ffdb-r9pqz, container calico-apiserver
Jan 30 20:21:19.005: INFO: Describing Pod calico-apiserver/calico-apiserver-5bd679ffdb-r9pqz
Jan 30 20:21:19.083: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-5f9dc85578-qzlqt, container calico-kube-controllers
Jan 30 20:21:19.083: INFO: Describing Pod calico-system/calico-kube-controllers-5f9dc85578-qzlqt
Jan 30 20:21:19.167: INFO: Creating log watcher for controller calico-system/calico-node-2gbx9, container calico-node
Jan 30 20:21:19.167: INFO: Describing Pod calico-system/calico-node-2gbx9
Jan 30 20:21:19.256: INFO: Creating log watcher for controller calico-system/calico-node-4cp6z, container calico-node
Jan 30 20:21:19.256: INFO: Describing Pod calico-system/calico-node-4cp6z
Jan 30 20:21:19.339: INFO: Describing Pod calico-system/calico-node-mxcrz
Jan 30 20:21:19.339: INFO: Creating log watcher for controller calico-system/calico-node-mxcrz, container calico-node
Jan 30 20:21:19.678: INFO: Creating log watcher for controller calico-system/calico-node-pn8pc, container calico-node
Jan 30 20:21:19.678: INFO: Describing Pod calico-system/calico-node-pn8pc
Jan 30 20:21:20.078: INFO: Describing Pod calico-system/calico-typha-745fb8d48d-6rzws
Jan 30 20:21:20.078: INFO: Creating log watcher for controller calico-system/calico-typha-745fb8d48d-6rzws, container calico-typha
Jan 30 20:21:20.477: INFO: Creating log watcher for controller calico-system/calico-typha-745fb8d48d-7nxhj, container calico-typha
Jan 30 20:21:20.477: INFO: Describing Pod calico-system/calico-typha-745fb8d48d-7nxhj
Jan 30 20:21:20.882: INFO: Creating log watcher for controller calico-system/csi-node-driver-8v2x9, container calico-csi
Jan 30 20:21:20.882: INFO: Creating log watcher for controller calico-system/csi-node-driver-8v2x9, container csi-node-driver-registrar
Jan 30 20:21:20.883: INFO: Describing Pod calico-system/csi-node-driver-8v2x9
Jan 30 20:21:21.278: INFO: Describing Pod calico-system/csi-node-driver-dnbbb
Jan 30 20:21:21.278: INFO: Creating log watcher for controller calico-system/csi-node-driver-dnbbb, container calico-csi
Jan 30 20:21:21.278: INFO: Creating log watcher for controller calico-system/csi-node-driver-dnbbb, container csi-node-driver-registrar
Jan 30 20:21:21.679: INFO: Describing Pod calico-system/csi-node-driver-fps25
Jan 30 20:21:21.679: INFO: Creating log watcher for controller calico-system/csi-node-driver-fps25, container calico-csi
Jan 30 20:21:21.679: INFO: Creating log watcher for controller calico-system/csi-node-driver-fps25, container csi-node-driver-registrar
Jan 30 20:21:22.078: INFO: Describing Pod calico-system/csi-node-driver-slv56
Jan 30 20:21:22.078: INFO: Creating log watcher for controller calico-system/csi-node-driver-slv56, container calico-csi
Jan 30 20:21:22.078: INFO: Creating log watcher for controller calico-system/csi-node-driver-slv56, container csi-node-driver-registrar
Jan 30 20:21:22.479: INFO: Describing Pod kube-system/coredns-565d847f94-pbg9q
Jan 30 20:21:22.479: INFO: Creating log watcher for controller kube-system/coredns-565d847f94-pbg9q, container coredns
Jan 30 20:21:22.878: INFO: Describing Pod kube-system/coredns-565d847f94-szdjb
Jan 30 20:21:22.878: INFO: Creating log watcher for controller kube-system/coredns-565d847f94-szdjb, container coredns
Jan 30 20:21:23.277: INFO: Describing Pod kube-system/csi-azuredisk-controller-6b9657f4f7-v992c
Jan 30 20:21:23.277: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6b9657f4f7-v992c, container csi-snapshotter
Jan 30 20:21:23.277: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6b9657f4f7-v992c, container csi-provisioner
Jan 30 20:21:23.277: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6b9657f4f7-v992c, container liveness-probe
Jan 30 20:21:23.277: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6b9657f4f7-v992c, container azuredisk
Jan 30 20:21:23.278: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6b9657f4f7-v992c, container csi-resizer
Jan 30 20:21:23.278: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-6b9657f4f7-v992c, container csi-attacher
Jan 30 20:21:23.679: INFO: Describing Pod kube-system/csi-azuredisk-node-c7l7n
Jan 30 20:21:23.679: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-c7l7n, container node-driver-registrar
Jan 30 20:21:23.679: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-c7l7n, container azuredisk
Jan 30 20:21:23.679: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-c7l7n, container liveness-probe
Jan 30 20:21:24.082: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-fq7gm, container node-driver-registrar
Jan 30 20:21:24.082: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-fq7gm, container azuredisk
Jan 30 20:21:24.082: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-fq7gm, container liveness-probe
Jan 30 20:21:24.082: INFO: Describing Pod kube-system/csi-azuredisk-node-fq7gm
Jan 30 20:21:24.478: INFO: Describing Pod kube-system/csi-azuredisk-node-m9t4d
Jan 30 20:21:24.478: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-m9t4d, container node-driver-registrar
Jan 30 20:21:24.478: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-m9t4d, container azuredisk
Jan 30 20:21:24.478: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-m9t4d, container liveness-probe
Jan 30 20:21:24.879: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-rtdwq, container node-driver-registrar
Jan 30 20:21:24.879: INFO: Describing Pod kube-system/csi-azuredisk-node-rtdwq
Jan 30 20:21:24.879: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-rtdwq, container azuredisk
Jan 30 20:21:24.879: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-rtdwq, container liveness-probe
Jan 30 20:21:25.280: INFO: Describing Pod kube-system/etcd-k8s-upgrade-and-conformance-2l4n9w-control-plane-49xp4
Jan 30 20:21:25.280: INFO: Creating log watcher for controller kube-system/etcd-k8s-upgrade-and-conformance-2l4n9w-control-plane-49xp4, container etcd
Jan 30 20:21:25.677: INFO: Describing Pod kube-system/etcd-k8s-upgrade-and-conformance-2l4n9w-control-plane-68t4w
Jan 30 20:21:25.677: INFO: Creating log watcher for controller kube-system/etcd-k8s-upgrade-and-conformance-2l4n9w-control-plane-68t4w, container etcd
Jan 30 20:21:26.077: INFO: Creating log watcher for controller kube-system/etcd-k8s-upgrade-and-conformance-2l4n9w-control-plane-8rgdq, container etcd
Jan 30 20:21:26.077: INFO: Describing Pod kube-system/etcd-k8s-upgrade-and-conformance-2l4n9w-control-plane-8rgdq
Jan 30 20:21:26.479: INFO: Describing Pod kube-system/etcd-k8s-upgrade-and-conformance-2l4n9w-control-plane-bgqq5
Jan 30 20:21:26.479: INFO: Creating log watcher for controller kube-system/etcd-k8s-upgrade-and-conformance-2l4n9w-control-plane-bgqq5, container etcd
Jan 30 20:21:26.879: INFO: Describing Pod kube-system/kube-apiserver-k8s-upgrade-and-conformance-2l4n9w-control-plane-49xp4
Jan 30 20:21:26.879: INFO: Creating log watcher for controller kube-system/kube-apiserver-k8s-upgrade-and-conformance-2l4n9w-control-plane-49xp4, container kube-apiserver
Jan 30 20:21:27.277: INFO: Creating log watcher for controller kube-system/kube-apiserver-k8s-upgrade-and-conformance-2l4n9w-control-plane-68t4w, container kube-apiserver
Jan 30 20:21:27.277: INFO: Describing Pod kube-system/kube-apiserver-k8s-upgrade-and-conformance-2l4n9w-control-plane-68t4w
Jan 30 20:21:27.682: INFO: Describing Pod kube-system/kube-apiserver-k8s-upgrade-and-conformance-2l4n9w-control-plane-8rgdq
Jan 30 20:21:27.682: INFO: Creating log watcher for controller kube-system/kube-apiserver-k8s-upgrade-and-conformance-2l4n9w-control-plane-8rgdq, container kube-apiserver
Jan 30 20:21:28.079: INFO: Describing Pod kube-system/kube-apiserver-k8s-upgrade-and-conformance-2l4n9w-control-plane-bgqq5
Jan 30 20:21:28.079: INFO: Creating log watcher for controller kube-system/kube-apiserver-k8s-upgrade-and-conformance-2l4n9w-control-plane-bgqq5, container kube-apiserver
Jan 30 20:21:28.477: INFO: Creating log watcher for controller kube-system/kube-controller-manager-k8s-upgrade-and-conformance-2l4n9w-control-plane-49xp4, container kube-controller-manager
Jan 30 20:21:28.477: INFO: Describing Pod kube-system/kube-controller-manager-k8s-upgrade-and-conformance-2l4n9w-control-plane-49xp4
Jan 30 20:21:28.878: INFO: Describing Pod kube-system/kube-controller-manager-k8s-upgrade-and-conformance-2l4n9w-control-plane-68t4w
Jan 30 20:21:28.878: INFO: Creating log watcher for controller kube-system/kube-controller-manager-k8s-upgrade-and-conformance-2l4n9w-control-plane-68t4w, container kube-controller-manager
Jan 30 20:21:29.282: INFO: Describing Pod kube-system/kube-controller-manager-k8s-upgrade-and-conformance-2l4n9w-control-plane-8rgdq
Jan 30 20:21:29.282: INFO: Creating log watcher for controller kube-system/kube-controller-manager-k8s-upgrade-and-conformance-2l4n9w-control-plane-8rgdq, container kube-controller-manager
Jan 30 20:21:29.678: INFO: Describing Pod kube-system/kube-controller-manager-k8s-upgrade-and-conformance-2l4n9w-control-plane-bgqq5
Jan 30 20:21:29.678: INFO: Creating log watcher for controller kube-system/kube-controller-manager-k8s-upgrade-and-conformance-2l4n9w-control-plane-bgqq5, container kube-controller-manager
Jan 30 20:21:30.077: INFO: Creating log watcher for controller kube-system/kube-proxy-9cm6v, container kube-proxy
Jan 30 20:21:30.077: INFO: Describing Pod kube-system/kube-proxy-9cm6v
Jan 30 20:21:30.476: INFO: Describing Pod kube-system/kube-proxy-gdmqr
Jan 30 20:21:30.476: INFO: Creating log watcher for controller kube-system/kube-proxy-gdmqr, container kube-proxy
Jan 30 20:21:30.878: INFO: Describing Pod kube-system/kube-proxy-q82c8
Jan 30 20:21:30.878: INFO: Creating log watcher for controller kube-system/kube-proxy-q82c8, container kube-proxy
Jan 30 20:21:31.279: INFO: Describing Pod kube-system/kube-proxy-zcp8h
Jan 30 20:21:31.279: INFO: Creating log watcher for controller kube-system/kube-proxy-zcp8h, container kube-proxy
Jan 30 20:21:31.677: INFO: Describing Pod kube-system/kube-scheduler-k8s-upgrade-and-conformance-2l4n9w-control-plane-49xp4
Jan 30 20:21:31.677: INFO: Creating log watcher for controller kube-system/kube-scheduler-k8s-upgrade-and-conformance-2l4n9w-control-plane-49xp4, container kube-scheduler
Jan 30 20:21:32.078: INFO: Describing Pod kube-system/kube-scheduler-k8s-upgrade-and-conformance-2l4n9w-control-plane-68t4w
Jan 30 20:21:32.078: INFO: Creating log watcher for controller kube-system/kube-scheduler-k8s-upgrade-and-conformance-2l4n9w-control-plane-68t4w, container kube-scheduler
Jan 30 20:21:32.477: INFO: Describing Pod kube-system/kube-scheduler-k8s-upgrade-and-conformance-2l4n9w-control-plane-8rgdq
Jan 30 20:21:32.477: INFO: Creating log watcher for controller kube-system/kube-scheduler-k8s-upgrade-and-conformance-2l4n9w-control-plane-8rgdq, container kube-scheduler
Jan 30 20:21:32.883: INFO: Creating log watcher for controller kube-system/kube-scheduler-k8s-upgrade-and-conformance-2l4n9w-control-plane-bgqq5, container kube-scheduler
Jan 30 20:21:32.883: INFO: Describing Pod kube-system/kube-scheduler-k8s-upgrade-and-conformance-2l4n9w-control-plane-bgqq5
Jan 30 20:21:33.276: INFO: Fetching kube-system pod logs took 14.896818277s
Jan 30 20:21:33.276: INFO: Dumping workload cluster k8s-upgrade-and-conformance-nsdl74/k8s-upgrade-and-conformance-2l4n9w Azure activity log
Jan 30 20:21:33.278: INFO: Creating log watcher for controller tigera-operator/tigera-operator-64db64cb98-v869l, container tigera-operator
Jan 30 20:21:33.279: INFO: Describing Pod tigera-operator/tigera-operator-64db64cb98-v869l
Jan 30 20:21:38.048: INFO: Fetching activity logs took 4.771080881s
STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-nsdl74" namespace - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/common.go:51 @ 01/30/23 20:21:38.048
STEP: Deleting cluster k8s-upgrade-and-conformance-nsdl74/k8s-upgrade-and-conformance-2l4n9w - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/common.go:51 @ 01/30/23 20:21:38.396
STEP: Deleting cluster k8s-upgrade-and-conformance-2l4n9w - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/30/23 20:21:38.412
INFO: Waiting for the Cluster k8s-upgrade-and-conformance-nsdl74/k8s-upgrade-and-conformance-2l4n9w to be deleted
STEP: Waiting for cluster k8s-upgrade-and-conformance-2l4n9w to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/30/23 20:21:38.424
[FAILED] Timed out after 1800.001s.
Expected
    <bool>: false
to be true
In [AfterEach] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:176 @ 01/30/23 20:51:38.425
< Exit [AfterEach] Running KCP upgrade in a HA cluster [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/cluster_upgrade.go:242 @ 01/30/23 20:51:38.425 (31m28.979s)
> Enter [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:97 @ 01/30/23 20:51:38.425
Jan 30 20:51:38.425: INFO: FAILED!
Jan 30 20:51:38.426: INFO: Cleaning up after "Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest" spec
STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:216 @ 01/30/23 20:51:38.426
INFO: "Should create and upgrade a workload cluster and eventually run kubetest" started at Mon, 30 Jan 2023 20:52:34 UTC on Ginkgo node 5 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
< Exit [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:97 @ 01/30/23 20:52:34.035 (55.61s)
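Triage note: the secondary [AfterEach] failure above (`Expected <bool>: false to be true` at `framework/cluster_helpers.go:176`) is the cluster-deletion wait expiring after 1800s; roughly, the framework polls until a Get on the Cluster object returns NotFound. A sketch of that kind of check against a management cluster using controller-runtime (names and intervals here are illustrative, not the framework's exact code; it needs a kubeconfig pointing at a management cluster to run):

```go
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/types"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// clusterIsDeleted reports whether the named Cluster object is gone from the
// management cluster, i.e. a Get returns NotFound.
func clusterIsDeleted(ctx context.Context, c client.Client, key types.NamespacedName) bool {
	return apierrors.IsNotFound(c.Get(ctx, key, &clusterv1.Cluster{}))
}

func main() {
	scheme := runtime.NewScheme()
	if err := clusterv1.AddToScheme(scheme); err != nil {
		panic(err)
	}
	c, err := client.New(ctrl.GetConfigOrDie(), client.Options{Scheme: scheme})
	if err != nil {
		panic(err)
	}

	// The Cluster that failed to go away in this run.
	key := types.NamespacedName{
		Namespace: "k8s-upgrade-and-conformance-nsdl74",
		Name:      "k8s-upgrade-and-conformance-2l4n9w",
	}

	ctx := context.Background()
	deadline := time.Now().Add(30 * time.Minute) // the wait that timed out above
	for time.Now().Before(deadline) {
		if clusterIsDeleted(ctx, c, key) {
			fmt.Println("cluster deleted")
			return
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("timed out waiting for cluster deletion")
}
```

When this wait times out, the usual next step is inspecting the dumped Cluster API resources for finalizers that never cleared.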
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sRunning\sthe\sCluster\sAPI\sE2E\stests\sRunning\sthe\sworkload\scluster\supgrade\sspec\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\seventually\srun\skubetest$'
[FAILED] Timed out after 195.011s.
Expected success, but got an error:
    <*errors.withStack | 0xc000c10180>: {
        error: <*errors.withMessage | 0xc0002242a0>{
            cause: <*url.Error | 0xc00108e900>{
                Op: "Get",
                URL: "https://k8s-upgrade-and-conformance-0cvexn-eabd134b.canadacentral.cloudapp.azure.com:6443/version",
                Err: <*net.OpError | 0xc000499450>{
                    Op: "dial",
                    Net: "tcp",
                    Source: nil,
                    Addr: <*net.TCPAddr | 0xc000c76c00>{
                        IP: [20, 175, 227, 160],
                        Port: 6443,
                        Zone: "",
                    },
                    Err: <*net.timeoutError | 0x5d04820>{},
                },
            },
            msg: "Kubernetes cluster unreachable",
        },
        stack: [0x3547885, 0x35ea4fb, 0x36428b2, 0x154d085, 0x154c57c, 0x196b29a, 0x196c657, 0x196964d, 0x3642149, 0x3632654, 0x3635637, 0x2ff0810, 0x341498e, 0x194637b, 0x195a958, 0x14da741],
    }
    Kubernetes cluster unreachable: Get "https://k8s-upgrade-and-conformance-0cvexn-eabd134b.canadacentral.cloudapp.azure.com:6443/version": dial tcp 20.175.227.160:6443: i/o timeout
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:949 @ 01/30/23 19:15:40.394
(from junit.e2e_suite.1.xml)
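Triage note: this failure is pure connectivity; the helper at `test/e2e/helpers.go:949` never reached the workload cluster's API server. A quick stdlib probe of the same `/version` endpoint can confirm whether the endpoint is reachable at all (hostname copied from this run; substitute your own, and note a real client would verify the cluster CA from the kubeconfig rather than skipping TLS verification):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// Probe the workload cluster's API server /version endpoint, the same
// request that timed out above ("dial tcp ...:6443: i/o timeout").
func main() {
	client := &http.Client{
		Timeout: 10 * time.Second, // fail fast instead of hanging on dial
		Transport: &http.Transport{
			// This probe only checks reachability, so skip cert verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	// Endpoint from this run; substitute your own cluster's load balancer DNS name.
	url := "https://k8s-upgrade-and-conformance-0cvexn-eabd134b.canadacentral.cloudapp.azure.com:6443/version"

	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("Kubernetes cluster unreachable:", err) // the error seen in this run
		return
	}
	defer resp.Body.Close()
	fmt.Println("API server reachable, status:", resp.Status)
}
```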
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-0cvexn-mp-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-0cvexn-md-0 created
cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-0cvexn created
machinedeployment.cluster.x-k8s.io/k8s-upgrade-and-conformance-0cvexn-md-0 created
machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-0cvexn-mp-0 created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/k8s-upgrade-and-conformance-0cvexn-control-plane created
azurecluster.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-0cvexn created
azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity-sp created
azuremachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-0cvexn-mp-0 created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-0cvexn-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-0cvexn-md-0 created

Failed to get logs for Machine k8s-upgrade-and-conformance-0cvexn-md-0-78764f796c-57htg, Cluster k8s-upgrade-and-conformance-lsuhgv/k8s-upgrade-and-conformance-0cvexn: [dialing from control plane to target node at k8s-upgrade-and-conformance-0cvexn-md-0-fprzc: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VM Boot Diagnostic logs: AzureMachine provider ID is nil]
Failed to get logs for Machine k8s-upgrade-and-conformance-0cvexn-md-0-78764f796c-wt87z, Cluster k8s-upgrade-and-conformance-lsuhgv/k8s-upgrade-and-conformance-0cvexn: Unable to collect VM Boot Diagnostic logs: AzureMachine provider ID is nil

> Enter [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:52 @ 01/30/23 19:10:31.807
INFO: "" started at Mon, 30 Jan 2023 19:10:31 UTC on Ginkgo node 7 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
< Exit [BeforeEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:52 @ 01/30/23 19:10:31.871 (65ms)
> Enter [BeforeEach] Running the workload cluster upgrade spec [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/cluster_upgrade.go:84 @ 01/30/23 19:10:31.871
STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/common.go:51 @ 01/30/23 19:10:31.872
INFO: Creating namespace k8s-upgrade-and-conformance-lsuhgv
INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-lsuhgv"
< Exit [BeforeEach] Running the workload cluster upgrade spec [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/cluster_upgrade.go:84 @ 01/30/23 19:10:31.911 (40ms)
> Enter [It] Should create and upgrade a workload cluster and eventually run kubetest - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/cluster_upgrade.go:118 @ 01/30/23 19:10:31.911
STEP: Creating a workload cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/cluster_upgrade.go:119 @ 01/30/23 19:10:31.911
INFO: Creating the workload cluster with name "k8s-upgrade-and-conformance-0cvexn" using the "upgrades" template (Kubernetes v1.25.6, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster k8s-upgrade-and-conformance-0cvexn --infrastructure (default) --kubernetes-version v1.25.6 --control-plane-machine-count 1 --worker-machine-count 2 --flavor upgrades
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/cluster_helpers.go:134 @ 01/30/23 19:10:35.258
INFO: Waiting for control plane to be initialized
STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:51 @ 01/30/23 19:12:25.356
STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:112 @ 01/30/23 19:12:25.356
[FAILED] Timed out after 195.011s.
Expected success, but got an error:
    <*errors.withStack | 0xc000c10180>: {
        error: <*errors.withMessage | 0xc0002242a0>{
            cause: <*url.Error | 0xc00108e900>{
                Op: "Get",
                URL: "https://k8s-upgrade-and-conformance-0cvexn-eabd134b.canadacentral.cloudapp.azure.com:6443/version",
                Err: <*net.OpError | 0xc000499450>{
                    Op: "dial",
                    Net: "tcp",
                    Source: nil,
                    Addr: <*net.TCPAddr | 0xc000c76c00>{
                        IP: [20, 175, 227, 160],
                        Port: 6443,
                        Zone: "",
                    },
                    Err: <*net.timeoutError | 0x5d04820>{},
                },
            },
            msg: "Kubernetes cluster unreachable",
        },
        stack: [0x3547885, 0x35ea4fb, 0x36428b2, 0x154d085, 0x154c57c, 0x196b29a, 0x196c657, 0x196964d, 0x3642149, 0x3632654, 0x3635637, 0x2ff0810, 0x341498e, 0x194637b, 0x195a958, 0x14da741],
    }
    Kubernetes cluster unreachable: Get "https://k8s-upgrade-and-conformance-0cvexn-eabd134b.canadacentral.cloudapp.azure.com:6443/version": dial tcp 20.175.227.160:6443: i/o timeout
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:949 @ 01/30/23 19:15:40.394
< Exit [It] Should create and upgrade a workload cluster and eventually run kubetest - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/cluster_upgrade.go:118 @ 01/30/23 19:15:40.394 (5m8.483s)
> Enter [AfterEach] Running the workload cluster upgrade spec [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/cluster_upgrade.go:242 @ 01/30/23 19:15:40.394
STEP: Dumping logs from the "k8s-upgrade-and-conformance-0cvexn" workload cluster - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/common.go:51 @ 01/30/23 19:15:40.394
Jan 30 19:15:40.394: INFO: Dumping workload cluster k8s-upgrade-and-conformance-lsuhgv/k8s-upgrade-and-conformance-0cvexn logs
Jan 30 19:15:40.475: INFO: Collecting logs for Linux node k8s-upgrade-and-conformance-0cvexn-control-plane-nvw4w in cluster k8s-upgrade-and-conformance-0cvexn in namespace k8s-upgrade-and-conformance-lsuhgv
Jan 30 19:15:46.623: INFO: Collecting boot logs for AzureMachine k8s-upgrade-and-conformance-0cvexn-control-plane-nvw4w
Jan 30 19:15:47.763: INFO: Collecting logs for Linux node k8s-upgrade-and-conformance-0cvexn-md-0-fprzc in cluster k8s-upgrade-and-conformance-0cvexn in namespace k8s-upgrade-and-conformance-lsuhgv
Jan 30 19:16:49.923: INFO: Collecting boot logs for AzureMachine k8s-upgrade-and-conformance-0cvexn-md-0-fprzc
Jan 30 19:16:49.965: INFO: Collecting logs for Linux node k8s-upgrade-and-conformance-0cvexn-md-0-g5c4c in cluster k8s-upgrade-and-conformance-0cvexn in namespace k8s-upgrade-and-conformance-lsuhgv
Jan 30 19:17:41.516: INFO: Collecting boot logs for AzureMachine k8s-upgrade-and-conformance-0cvexn-md-0-g5c4c
Jan 30 19:17:41.602: INFO: Dumping workload cluster k8s-upgrade-and-conformance-lsuhgv/k8s-upgrade-and-conformance-0cvexn kube-system pod logs
Jan 30 19:17:42.010: INFO: Describing Pod kube-system/coredns-565d847f94-848p6
Jan 30 19:17:42.010: INFO: Creating log watcher for controller kube-system/coredns-565d847f94-848p6, container coredns
Jan 30 19:17:42.082: INFO: Creating log watcher for controller kube-system/coredns-565d847f94-vjk9x, container coredns
Jan 30 19:17:42.082: INFO: Describing Pod kube-system/coredns-565d847f94-vjk9x
Jan 30 19:17:42.153: INFO: Creating log watcher for controller kube-system/etcd-k8s-upgrade-and-conformance-0cvexn-control-plane-nvw4w, container etcd
Jan 30 19:17:42.153: INFO: Describing Pod kube-system/etcd-k8s-upgrade-and-conformance-0cvexn-control-plane-nvw4w
Jan 30 19:17:42.225: INFO: Creating log watcher for controller kube-system/kube-apiserver-k8s-upgrade-and-conformance-0cvexn-control-plane-nvw4w, container kube-apiserver
Jan 30 19:17:42.225: INFO: Describing Pod kube-system/kube-apiserver-k8s-upgrade-and-conformance-0cvexn-control-plane-nvw4w
Jan 30 19:17:42.297: INFO: Creating log watcher for controller kube-system/kube-controller-manager-k8s-upgrade-and-conformance-0cvexn-control-plane-nvw4w, container kube-controller-manager
Jan 30 19:17:42.297: INFO: Describing Pod kube-system/kube-controller-manager-k8s-upgrade-and-conformance-0cvexn-control-plane-nvw4w
Jan 30 19:17:42.373: INFO: Creating log watcher for controller kube-system/kube-proxy-7rhm6, container kube-proxy
Jan 30 19:17:42.373: INFO: Describing Pod kube-system/kube-proxy-7rhm6
Jan 30 19:17:42.771: INFO: Describing Pod kube-system/kube-proxy-hv7vf
Jan 30 19:17:42.771: INFO: Creating log watcher for controller kube-system/kube-proxy-hv7vf, container kube-proxy
Jan 30 19:17:42.821: INFO: Error starting logs stream for pod kube-system/kube-proxy-hv7vf, container kube-proxy: the server could not find the requested resource ( pods/log kube-proxy-hv7vf)
Jan 30 19:17:43.171: INFO: Fetching kube-system pod logs took 1.568560177s
Jan 30 19:17:43.171: INFO: Dumping workload cluster k8s-upgrade-and-conformance-lsuhgv/k8s-upgrade-and-conformance-0cvexn Azure activity log
Jan 30 19:17:43.171: INFO: Creating log watcher for controller kube-system/kube-scheduler-k8s-upgrade-and-conformance-0cvexn-control-plane-nvw4w, container kube-scheduler
Jan 30 19:17:43.171: INFO: Describing Pod kube-system/kube-scheduler-k8s-upgrade-and-conformance-0cvexn-control-plane-nvw4w
Jan 30 19:17:45.126: INFO: Fetching activity logs took 1.955042538s
STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-lsuhgv" namespace - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/common.go:51 @ 01/30/23 19:17:45.126
STEP: Deleting cluster k8s-upgrade-and-conformance-lsuhgv/k8s-upgrade-and-conformance-0cvexn - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/common.go:51 @ 01/30/23 19:17:45.845
STEP: Deleting cluster k8s-upgrade-and-conformance-0cvexn - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/30/23 19:17:45.873
INFO: Waiting for the Cluster k8s-upgrade-and-conformance-lsuhgv/k8s-upgrade-and-conformance-0cvexn to be deleted
STEP: Waiting for cluster k8s-upgrade-and-conformance-0cvexn to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/framework/ginkgoextensions/output.go:35 @ 01/30/23 19:17:45.897
STEP: Deleting namespace used for hosting the "k8s-upgrade-and-conformance" test spec - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/common.go:51 @ 01/30/23 19:23:26.13
INFO: Deleting namespace k8s-upgrade-and-conformance-lsuhgv
< Exit [AfterEach] Running the workload cluster upgrade spec [K8s-Upgrade] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.2/e2e/cluster_upgrade.go:242 @ 01/30/23 19:23:26.151 (7m45.757s)
> Enter [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:97 @ 01/30/23 19:23:26.151
Jan 30 19:23:26.151: INFO: FAILED!
Jan 30 19:23:26.151: INFO: Cleaning up after "Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest" spec
STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:216 @ 01/30/23 19:23:26.151
INFO: "Should create and upgrade a workload cluster and eventually run kubetest" started at Mon, 30 Jan 2023 19:23:31 UTC on Ginkgo node 7 of 10 and junit test report to file /logs/artifacts/test_e2e_junit.e2e_suite.1.xml
< Exit [AfterEach] Running the Cluster API E2E tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:97 @ 01/30/23 19:23:31.1 (4.949s)
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [It] Conformance Tests conformance-tests
capz-e2e [It] Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e [It] Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e [It] Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e [It] Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e [It] Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e [It] Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e [It] Workload cluster creation Creating a Flatcar cluster [OPTIONAL] With Flatcar control-plane and worker nodes
capz-e2e [It] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and machinepools [OPTIONAL] with 1 control plane node and 1 machinepool
capz-e2e [It] Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e [It] Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e [It] Workload cluster creation Creating a private cluster [OPTIONAL] Creates a public management cluster in a custom vnet
capz-e2e [It] Workload cluster creation Creating an AKS cluster [Managed Kubernetes] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node