Result | FAILURE
Tests | 1 failed / 8 succeeded
Started |
Elapsed | 50m44s
Revision | release-1.5
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\sRunning\sthe\sMachineDeployment\srollout\sspec\sShould\ssuccessfully\supgrade\sMachines\supon\schanges\sin\srelevant\sMachineDeployment\sfields$'
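The --ginkgo.focus regex above selects only the failing spec. A minimal local-reproduction sketch follows; it assumes a cluster-api-provider-azure checkout on release-1.5, the usual e2e prerequisites (Azure credentials and environment variables), and that the repo's Makefile honors a GINKGO_FOCUS variable as described in the CAPZ developer docs — none of which is taken from this log, where the run is driven through hack/e2e.go:

    # Hypothetical local repro of only the failing spec; GINKGO_FOCUS and the
    # test-e2e target are assumptions based on the CAPZ developer docs, not on
    # this log output.
    cd "$(go env GOPATH)/src/sigs.k8s.io/cluster-api-provider-azure"
    git checkout release-1.5
    GINKGO_FOCUS='Running the MachineDeployment rollout spec' make test-e2e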
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.1/e2e/md_rollout.go:71

Timed out after 41.795s.
Failed to apply the cluster template
Expected success, but got an error:
    <*errors.withStack | 0xc000b28108>: {
        error: <*exec.ExitError | 0xc00043e000>{
            ProcessState: {
                pid: 36346,
                status: 256,
                rusage: {
                    Utime: {Sec: 0, Usec: 685684},
                    Stime: {Sec: 0, Usec: 332312},
                    Maxrss: 97208, Ixrss: 0, Idrss: 0, Isrss: 0,
                    Minflt: 12755, Majflt: 0, Nswap: 0,
                    Inblock: 0, Oublock: 25112,
                    Msgsnd: 0, Msgrcv: 0, Nsignals: 0,
                    Nvcsw: 4682, Nivcsw: 1542,
                },
            },
            Stderr: nil,
        },
        stack: [0x268dd00, 0x268e250, 0x281fe0c, 0x2cb9f93, 0x13c5565, 0x13c4a5c, 0x176e031, 0x176e399, 0x176e785, 0x176e12b, 0x2cb958c, 0x2e37908, 0x1749e51, 0x1749845, 0x17488bb, 0x174f169, 0x174eb52, 0x175b451, 0x175b176, 0x175a7c5, 0x175ce85, 0x176a6e9, 0x176a4fe, 0x31bd478, 0x141accb, 0x1352801],
    }
    exit status 1

/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.1/framework/clusterctl/clusterctl_helpers.go:278
from junit.e2e_suite.4.xml
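The *exec.ExitError wrapped above comes from the template-apply step shelling out: the `error when creating "STDIN"` message in the spec log below shows the rendered manifest being piped into kubectl and rejected by a webhook, which kubectl reports as exit status 1. A rough shell equivalent (a sketch only, not the framework's actual code path; the clusterctl flags are abridged from the line logged below):

    # Sketch of the failing "Applying the cluster template yaml" step: render
    # the template and pipe it to kubectl (hence `error when creating "STDIN"`).
    # The webhook rejection makes kubectl exit 1, which Go surfaces as
    # *exec.ExitError (wait status 256 == exit code 1).
    clusterctl config cluster md-rollout-z5bn92 \
        --kubernetes-version v1.23.15 \
        --control-plane-machine-count 1 \
        --worker-machine-count 1 \
      | kubectl apply -f -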
INFO: "Should successfully upgrade Machines upon changes in relevant MachineDeployment fields" started at Tue, 10 Jan 2023 17:08:30 UTC on Ginkgo node 4 of 10 �[1mSTEP�[0m: Creating a namespace for hosting the "md-rollout" test spec INFO: Creating namespace md-rollout-ukkb86 INFO: Creating event watcher for namespace "md-rollout-ukkb86" �[1mSTEP�[0m: Creating a workload cluster INFO: Creating the workload cluster with name "md-rollout-z5bn92" using the "(default)" template (Kubernetes v1.23.15, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster md-rollout-z5bn92 --infrastructure (default) --kubernetes-version v1.23.15 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default) INFO: Applying the cluster template yaml to the cluster Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "validation.clusterresourceset.addons.cluster.x-k8s.io": failed to call webhook: Post "https://capi-webhook-service.capi-system.svc:443/validate-addons-cluster-x-k8s-io-v1beta1-clusterresourceset?timeout=10s": read tcp 172.17.0.2:40674->10.96.170.142:443: read: connection reset by peer Jan 10 17:09:14.997: INFO: FAILED! Jan 10 17:09:14.997: INFO: Cleaning up after "Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields" spec �[1mSTEP�[0m: Redacting sensitive information from logs INFO: "Should successfully upgrade Machines upon changes in relevant MachineDeployment fields" ran for 1m0s on Ginkgo node 4 of 10
capz-e2e Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should adopt up-to-date control plane Machines without modification Should adopt up-to-date control plane Machines without modification
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e Conformance Tests conformance-tests
capz-e2e Running the Cluster API E2E tests API Version Upgrade upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e Workload cluster creation Creating a private cluster [REQUIRED] Creates a public management cluster in a custom vnet
capz-e2e Workload cluster creation Creating an AKS cluster [EXPERIMENTAL][Managed Kubernetes] with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=external CCM=external AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=external CCM=internal AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [K8s-Upgrade] Running the workload cluster upgrade tests [CSI Migration] Running CSI migration test CSI=internal CCM=internal AzureDiskCSIMigration=false: upgrade to v1.23 should create volumes dynamically with intree cloud provider
... skipping 540 lines ...
INFO: Creating event watcher for namespace "md-rollout-ukkb86"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "md-rollout-z5bn92" using the "(default)" template (Kubernetes v1.23.15, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster md-rollout-z5bn92 --infrastructure (default) --kubernetes-version v1.23.15 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default)
INFO: Applying the cluster template yaml to the cluster
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "validation.clusterresourceset.addons.cluster.x-k8s.io": failed to call webhook: Post "https://capi-webhook-service.capi-system.svc:443/validate-addons-cluster-x-k8s-io-v1beta1-clusterresourceset?timeout=10s": read tcp 172.17.0.2:40674->10.96.170.142:443: read: connection reset by peer
Jan 10 17:09:14.997: INFO: FAILED!
Jan 10 17:09:14.997: INFO: Cleaning up after "Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields" spec
STEP: Redacting sensitive information from logs
INFO: "Should successfully upgrade Machines upon changes in relevant MachineDeployment fields" ran for 1m0s on Ginkgo node 4 of 10
• Failure [60.277 seconds]
... skipping 2 lines ...
  Running the MachineDeployment rollout spec
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:121
    Should successfully upgrade Machines upon changes in relevant MachineDeployment fields [It]
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.1/e2e/md_rollout.go:71

    Timed out after 41.795s.
    Failed to apply the cluster template
    Expected success, but got an error:
        <*errors.withStack | 0xc000b28108>: {
            error: <*exec.ExitError | 0xc00043e000>{
                ProcessState: {
                    pid: 36346,
                    status: 256,
                    rusage: {
                        Utime: {Sec: 0, Usec: 685684},
                        Stime: {Sec: 0, Usec: 332312},
... skipping 100 lines ...
STEP: Dumping workload cluster kcp-adoption-2bmts5/kcp-adoption-kvx5uc Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-fdzj6, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-fdzj6
STEP: Creating log watcher for controller kube-system/calico-node-xnkfh, container calico-node
STEP: Collecting events for Pod kube-system/kube-apiserver-kcp-adoption-kvx5uc-control-plane-0
STEP: Collecting events for Pod kube-system/kube-proxy-hbgtt
STEP: failed to find events of Pod "kube-apiserver-kcp-adoption-kvx5uc-control-plane-0"
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-4w6wv, container coredns
STEP: Creating log watcher for controller kube-system/kube-controller-manager-kcp-adoption-kvx5uc-control-plane-0, container kube-controller-manager
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-4w6wv
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-zdsjn, container coredns
STEP: Collecting events for Pod kube-system/kube-controller-manager-kcp-adoption-kvx5uc-control-plane-0
STEP: failed to find events of Pod "kube-controller-manager-kcp-adoption-kvx5uc-control-plane-0"
STEP: Creating log watcher for controller kube-system/kube-proxy-hbgtt, container kube-proxy
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-zdsjn
STEP: Collecting events for Pod kube-system/etcd-kcp-adoption-kvx5uc-control-plane-0
STEP: failed to find events of Pod "etcd-kcp-adoption-kvx5uc-control-plane-0"
STEP: Collecting events for Pod kube-system/kube-scheduler-kcp-adoption-kvx5uc-control-plane-0
STEP: failed to find events of Pod "kube-scheduler-kcp-adoption-kvx5uc-control-plane-0"
STEP: Creating log watcher for controller kube-system/kube-scheduler-kcp-adoption-kvx5uc-control-plane-0, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-apiserver-kcp-adoption-kvx5uc-control-plane-0, container kube-apiserver
STEP: Creating log watcher for controller kube-system/etcd-kcp-adoption-kvx5uc-control-plane-0, container etcd
STEP: Collecting events for Pod kube-system/calico-node-xnkfh
STEP: Fetching activity logs took 2.316571409s
STEP: Dumping all the Cluster API resources in the "kcp-adoption-2bmts5" namespace
... skipping 122 lines ...
Should pivot the bootstrap cluster to a self-hosted cluster
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_selfhosted.go:107
INFO: "Should pivot the bootstrap cluster to a self-hosted cluster" started at Tue, 10 Jan 2023 17:08:30 UTC on Ginkgo node 5 of 10
STEP: Creating namespace "self-hosted" for hosting the cluster
Jan 10 17:08:30.232: INFO: starting to create namespace for hosting the "self-hosted" test spec
2023/01/10 17:08:30 failed trying to get namespace (self-hosted):namespaces "self-hosted" not found
INFO: Creating namespace self-hosted
INFO: Creating event watcher for namespace "self-hosted"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "self-hosted-0g8yf8" using the "management" template (Kubernetes v1.23.15, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster self-hosted-0g8yf8 --infrastructure (default) --kubernetes-version v1.23.15 --control-plane-machine-count 1 --worker-machine-count 1 --flavor management
... skipping 62 lines ...
STEP: Fetching kube-system pod logs took 716.47173ms
STEP: Dumping workload cluster self-hosted/self-hosted-0g8yf8 Azure activity log
STEP: Creating log watcher for controller kube-system/etcd-self-hosted-0g8yf8-control-plane-m4bst, container etcd
STEP: Collecting events for Pod kube-system/kube-controller-manager-self-hosted-0g8yf8-control-plane-m4bst
STEP: Creating log watcher for controller kube-system/kube-proxy-hkk5p, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-self-hosted-0g8yf8-control-plane-m4bst, container kube-apiserver
STEP: failed to find events of Pod "kube-controller-manager-self-hosted-0g8yf8-control-plane-m4bst"
STEP: Collecting events for Pod kube-system/kube-apiserver-self-hosted-0g8yf8-control-plane-m4bst
STEP: Creating log watcher for controller kube-system/calico-node-pxhhr, container calico-node
STEP: Collecting events for Pod kube-system/etcd-self-hosted-0g8yf8-control-plane-m4bst
STEP: Collecting events for Pod kube-system/calico-node-pxhhr
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-zdzw7, container coredns
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-8fgdm, container coredns
STEP: Collecting events for Pod kube-system/kube-proxy-hkk5p
STEP: Creating log watcher for controller kube-system/kube-scheduler-self-hosted-0g8yf8-control-plane-m4bst, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-proxy-cbrqt
STEP: Collecting events for Pod kube-system/kube-scheduler-self-hosted-0g8yf8-control-plane-m4bst
STEP: failed to find events of Pod "kube-scheduler-self-hosted-0g8yf8-control-plane-m4bst"
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-zdzw7
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-t75l4
STEP: Creating log watcher for controller kube-system/calico-node-5xpsj, container calico-node
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-t75l4, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-node-5xpsj
STEP: Creating log watcher for controller kube-system/kube-controller-manager-self-hosted-0g8yf8-control-plane-m4bst, container kube-controller-manager
STEP: failed to find events of Pod "kube-apiserver-self-hosted-0g8yf8-control-plane-m4bst"
STEP: Creating log watcher for controller kube-system/kube-proxy-cbrqt, container kube-proxy
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-8fgdm
STEP: Fetching activity logs took 1.81424677s
Jan 10 17:23:31.024: INFO: Dumping all the Cluster API resources in the "self-hosted" namespace
Jan 10 17:23:31.676: INFO: Deleting all clusters in the self-hosted namespace
STEP: Deleting cluster self-hosted-0g8yf8
INFO: Waiting for the Cluster self-hosted/self-hosted-0g8yf8 to be deleted
STEP: Waiting for cluster self-hosted-0g8yf8 to be deleted
INFO: Got error while streaming logs for pod capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-6c76c59d6b-pp57s, container manager: http2: client connection lost
Jan 10 17:30:22.077: INFO: Deleting namespace used for hosting the "self-hosted" test spec
INFO: Deleting namespace self-hosted
Jan 10 17:30:22.105: INFO: Checking if any resources are left over in Azure for spec "self-hosted"
STEP: Redacting sensitive information from logs
Jan 10 17:30:51.806: INFO: Cleaning up after "Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster" spec
STEP: Redacting sensitive information from logs
... skipping 232 lines ...
STEP: Collecting events for Pod kube-system/etcd-node-drain-wuzaet-control-plane-jmdgg
STEP: Collecting events for Pod kube-system/etcd-node-drain-wuzaet-control-plane-sl5fk
STEP: Collecting events for Pod kube-system/kube-apiserver-node-drain-wuzaet-control-plane-jmdgg
STEP: Creating log watcher for controller kube-system/kube-apiserver-node-drain-wuzaet-control-plane-sl5fk, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-apiserver-node-drain-wuzaet-control-plane-jmdgg, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-apiserver-node-drain-wuzaet-control-plane-sl5fk
STEP: Error starting logs stream for pod kube-system/calico-node-pnzzs, container calico-node: pods "node-drain-wuzaet-control-plane-jmdgg" not found
STEP: Error starting logs stream for pod kube-system/etcd-node-drain-wuzaet-control-plane-jmdgg, container etcd: pods "node-drain-wuzaet-control-plane-jmdgg" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-ljs77, container kube-proxy: pods "node-drain-wuzaet-control-plane-jmdgg" not found
STEP: Error starting logs stream for pod kube-system/kube-controller-manager-node-drain-wuzaet-control-plane-jmdgg, container kube-controller-manager: pods "node-drain-wuzaet-control-plane-jmdgg" not found
STEP: Error starting logs stream for pod kube-system/kube-scheduler-node-drain-wuzaet-control-plane-jmdgg, container kube-scheduler: pods "node-drain-wuzaet-control-plane-jmdgg" not found
STEP: Error starting logs stream for pod kube-system/kube-apiserver-node-drain-wuzaet-control-plane-jmdgg, container kube-apiserver: pods "node-drain-wuzaet-control-plane-jmdgg" not found
STEP: Fetching activity logs took 3.48706202s
STEP: Dumping all the Cluster API resources in the "node-drain-95f5rx" namespace
STEP: Deleting cluster node-drain-95f5rx/node-drain-wuzaet
STEP: Deleting cluster node-drain-wuzaet
INFO: Waiting for the Cluster node-drain-95f5rx/node-drain-wuzaet to be deleted
STEP: Waiting for cluster node-drain-wuzaet to be deleted
... skipping 72 lines ...
Jan 10 17:29:52.077: INFO: Collecting logs for Windows node quick-sta-64h4p in cluster quick-start-axe4mu in namespace quick-start-62jja2
Jan 10 17:32:30.079: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-64h4p to /logs/artifacts/clusters/quick-start-axe4mu/machines/quick-start-axe4mu-md-win-5f8957779-9fss7/crashdumps.tar
Jan 10 17:32:33.375: INFO: Collecting boot logs for AzureMachine quick-start-axe4mu-md-win-64h4p
Failed to get logs for machine quick-start-axe4mu-md-win-5f8957779-9fss7, cluster quick-start-62jja2/quick-start-axe4mu: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Jan 10 17:32:34.626: INFO: Collecting logs for Windows node quick-sta-92ksh in cluster quick-start-axe4mu in namespace quick-start-62jja2
Jan 10 17:35:09.843: INFO: Attempting to copy file /c:/crashdumps.tar on node quick-sta-92ksh to /logs/artifacts/clusters/quick-start-axe4mu/machines/quick-start-axe4mu-md-win-5f8957779-fvz7f/crashdumps.tar
Jan 10 17:35:13.164: INFO: Collecting boot logs for AzureMachine quick-start-axe4mu-md-win-92ksh
Failed to get logs for machine quick-start-axe4mu-md-win-5f8957779-fvz7f, cluster quick-start-62jja2/quick-start-axe4mu: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster quick-start-62jja2/quick-start-axe4mu kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-node-2gczm, container calico-node
STEP: Collecting events for Pod kube-system/kube-scheduler-quick-start-axe4mu-control-plane-zxwqc
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-2wh9b, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/csi-proxy-hlpx5, container csi-proxy
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-2wh9b
STEP: Collecting events for Pod kube-system/kube-controller-manager-quick-start-axe4mu-control-plane-zxwqc
STEP: Creating log watcher for controller kube-system/etcd-quick-start-axe4mu-control-plane-zxwqc, container etcd
STEP: failed to find events of Pod "kube-scheduler-quick-start-axe4mu-control-plane-zxwqc"
STEP: Collecting events for Pod kube-system/kube-proxy-windows-b4bsn
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-b4bsn, container kube-proxy
STEP: failed to find events of Pod "kube-controller-manager-quick-start-axe4mu-control-plane-zxwqc"
STEP: Collecting events for Pod kube-system/etcd-quick-start-axe4mu-control-plane-zxwqc
STEP: failed to find events of Pod "etcd-quick-start-axe4mu-control-plane-zxwqc"
STEP: Creating log watcher for controller kube-system/kube-apiserver-quick-start-axe4mu-control-plane-zxwqc, container kube-apiserver
STEP: Fetching kube-system pod logs took 1.180258805s
STEP: Dumping workload cluster quick-start-62jja2/quick-start-axe4mu Azure activity log
STEP: Creating log watcher for controller kube-system/kube-proxy-zfnsc, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-zfnsc
STEP: Collecting events for Pod kube-system/kube-proxy-9vgg6
... skipping 4 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-quick-start-axe4mu-control-plane-zxwqc, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-9vgg6, container kube-proxy
STEP: Collecting events for Pod kube-system/csi-proxy-k7kdl
STEP: Creating log watcher for controller kube-system/calico-node-8qtnt, container calico-node
STEP: Collecting events for Pod kube-system/kube-proxy-windows-2856h
STEP: Collecting events for Pod kube-system/calico-node-8qtnt
STEP: failed to find events of Pod "kube-apiserver-quick-start-axe4mu-control-plane-zxwqc"
STEP: Creating log watcher for controller kube-system/calico-node-windows-b5rt9, container calico-node-startup
STEP: Creating log watcher for controller kube-system/containerd-logger-hkckk, container containerd-logger
STEP: Collecting events for Pod kube-system/containerd-logger-hkckk
STEP: Creating log watcher for controller kube-system/kube-scheduler-quick-start-axe4mu-control-plane-zxwqc, container kube-scheduler
STEP: Creating log watcher for controller kube-system/csi-proxy-k7kdl, container csi-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-b5rt9, container calico-node-felix
... skipping 94 lines ...
Jan 10 17:32:27.808: INFO: Collecting logs for Windows node md-scale-n7887 in cluster md-scale-e2sj3k in namespace md-scale-jh5t3p
Jan 10 17:35:07.194: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-n7887 to /logs/artifacts/clusters/md-scale-e2sj3k/machines/md-scale-e2sj3k-md-win-869555b997-blx95/crashdumps.tar
Jan 10 17:35:10.697: INFO: Collecting boot logs for AzureMachine md-scale-e2sj3k-md-win-n7887
Failed to get logs for machine md-scale-e2sj3k-md-win-869555b997-blx95, cluster md-scale-jh5t3p/md-scale-e2sj3k: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
Jan 10 17:35:12.095: INFO: Collecting logs for Windows node md-scale-kfqxf in cluster md-scale-e2sj3k in namespace md-scale-jh5t3p
Jan 10 17:37:53.326: INFO: Attempting to copy file /c:/crashdumps.tar on node md-scale-kfqxf to /logs/artifacts/clusters/md-scale-e2sj3k/machines/md-scale-e2sj3k-md-win-869555b997-pfwrw/crashdumps.tar
Jan 10 17:37:56.854: INFO: Collecting boot logs for AzureMachine md-scale-e2sj3k-md-win-kfqxf
Failed to get logs for machine md-scale-e2sj3k-md-win-869555b997-pfwrw, cluster md-scale-jh5t3p/md-scale-e2sj3k: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1]
STEP: Dumping workload cluster md-scale-jh5t3p/md-scale-e2sj3k kube-system pod logs
STEP: Fetching kube-system pod logs took 1.144705292s
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-jkpp6
STEP: Creating log watcher for controller kube-system/etcd-md-scale-e2sj3k-control-plane-5w4b6, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-md-scale-e2sj3k-control-plane-5w4b6, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-apiserver-md-scale-e2sj3k-control-plane-5w4b6
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-fssgs, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-windows-jfs6h, container calico-node-startup
STEP: Collecting events for Pod kube-system/kube-proxy-windows-fssgs
STEP: Collecting events for Pod kube-system/calico-node-windows-jfs6h
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-qp45m, container kube-proxy
STEP: failed to find events of Pod "kube-apiserver-md-scale-e2sj3k-control-plane-5w4b6"
STEP: Creating log watcher for controller kube-system/calico-node-windows-lvrnd, container calico-node-felix
STEP: Creating log watcher for controller kube-system/calico-node-windows-jfs6h, container calico-node-felix
STEP: Creating log watcher for controller kube-system/containerd-logger-q8nvd, container containerd-logger
STEP: Creating log watcher for controller kube-system/calico-node-hpz7n, container calico-node
STEP: Creating log watcher for controller kube-system/containerd-logger-pqq92, container containerd-logger
STEP: Collecting events for Pod kube-system/calico-node-windows-lvrnd
... skipping 9 lines ...
STEP: Creating log watcher for controller kube-system/csi-proxy-jnh4l, container csi-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-wqkd9
STEP: Creating log watcher for controller kube-system/kube-scheduler-md-scale-e2sj3k-control-plane-5w4b6, container kube-scheduler
STEP: Collecting events for Pod kube-system/csi-proxy-jnh4l
STEP: Collecting events for Pod kube-system/kube-scheduler-md-scale-e2sj3k-control-plane-5w4b6
STEP: Creating log watcher for controller kube-system/csi-proxy-t4x89, container csi-proxy
STEP: failed to find events of Pod "kube-scheduler-md-scale-e2sj3k-control-plane-5w4b6"
STEP: Collecting events for Pod kube-system/etcd-md-scale-e2sj3k-control-plane-5w4b6
STEP: failed to find events of Pod "etcd-md-scale-e2sj3k-control-plane-5w4b6"
STEP: Collecting events for Pod kube-system/calico-node-g54w4
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-5mvwz, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/kube-proxy-nnd7d
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-5mvwz
STEP: Creating log watcher for controller kube-system/kube-proxy-nnd7d, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-g54w4, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-hpz7n
STEP: Creating log watcher for controller kube-system/calico-node-windows-lvrnd, container calico-node-startup
STEP: Collecting events for Pod kube-system/kube-controller-manager-md-scale-e2sj3k-control-plane-5w4b6
STEP: Dumping workload cluster md-scale-jh5t3p/md-scale-e2sj3k Azure activity log
STEP: failed to find events of Pod "kube-controller-manager-md-scale-e2sj3k-control-plane-5w4b6"
STEP: Fetching activity logs took 7.732533226s
STEP: Dumping all the Cluster API resources in the "md-scale-jh5t3p" namespace
STEP: Deleting cluster md-scale-jh5t3p/md-scale-e2sj3k
STEP: Deleting cluster md-scale-e2sj3k
INFO: Waiting for the Cluster md-scale-jh5t3p/md-scale-e2sj3k to be deleted
STEP: Waiting for cluster md-scale-e2sj3k to be deleted
... skipping 85 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-machine-pool-vx1v1b-control-plane-trhg2, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-apiserver-machine-pool-vx1v1b-control-plane-trhg2
STEP: Collecting events for Pod kube-system/calico-node-q28vl
STEP: Creating log watcher for controller kube-system/calico-node-qqwtl, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-qqwtl
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-mjdbw, container kube-proxy
STEP: failed to find events of Pod "etcd-machine-pool-vx1v1b-control-plane-trhg2"
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-85f479877b-qk4l5, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/coredns-bd6b6df9f-gbqhq
STEP: Creating log watcher for controller kube-system/kube-controller-manager-machine-pool-vx1v1b-control-plane-trhg2, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-92wch, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-controller-manager-machine-pool-vx1v1b-control-plane-trhg2
STEP: Collecting events for Pod kube-system/kube-scheduler-machine-pool-vx1v1b-control-plane-trhg2
STEP: Creating log watcher for controller kube-system/kube-apiserver-machine-pool-vx1v1b-control-plane-trhg2, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-windows-9xxmn, container calico-node-startup
STEP: Collecting events for Pod kube-system/calico-kube-controllers-85f479877b-qk4l5
STEP: Collecting events for Pod kube-system/kube-proxy-windows-mjdbw
STEP: Collecting events for Pod kube-system/kube-proxy-92wch
STEP: Creating log watcher for controller kube-system/kube-proxy-gdk7g, container kube-proxy
STEP: failed to find events of Pod "kube-apiserver-machine-pool-vx1v1b-control-plane-trhg2"
STEP: Creating log watcher for controller kube-system/calico-node-windows-9xxmn, container calico-node-felix
STEP: Collecting events for Pod kube-system/kube-proxy-gdk7g
STEP: Creating log watcher for controller kube-system/etcd-machine-pool-vx1v1b-control-plane-trhg2, container etcd
STEP: Creating log watcher for controller kube-system/coredns-bd6b6df9f-lm5sj, container coredns
STEP: failed to find events of Pod "kube-scheduler-machine-pool-vx1v1b-control-plane-trhg2"
STEP: failed to find events of Pod "kube-controller-manager-machine-pool-vx1v1b-control-plane-trhg2"
STEP: Error starting logs stream for pod kube-system/calico-node-q28vl, container calico-node: pods "machine-pool-vx1v1b-mp-0000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-windows-mjdbw, container kube-proxy: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-9xxmn, container calico-node-felix: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/calico-node-windows-9xxmn, container calico-node-startup: pods "win-p-win000002" not found
STEP: Error starting logs stream for pod kube-system/kube-proxy-gdk7g, container kube-proxy: pods "machine-pool-vx1v1b-mp-0000002" not found
STEP: Fetching activity logs took 5.091085884s
STEP: Dumping all the Cluster API resources in the "machine-pool-3evxvn" namespace
STEP: Deleting cluster machine-pool-3evxvn/machine-pool-vx1v1b
STEP: Deleting cluster machine-pool-vx1v1b
INFO: Waiting for the Cluster machine-pool-3evxvn/machine-pool-vx1v1b to be deleted
STEP: Waiting for cluster machine-pool-vx1v1b to be deleted
... skipping 15 lines ...
STEP: Tearing down the management cluster

Summarizing 1 Failure:

[Fail] Running the Cluster API E2E tests Running the MachineDeployment rollout spec [It] Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.1/framework/clusterctl/clusterctl_helpers.go:278

Ran 9 of 26 Specs in 2528.968 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 17 Skipped

Ginkgo ran 1 suite in 44m41.180823599s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes. A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021. Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc

make[1]: *** [Makefile:654: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:662: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...