PR monianshouhou: [release-1.3] Help windows cloud-node-manager to be better provisioned
Result: success
Tests: 0 failed / 1 succeeded
Started: 2022-09-08 13:18
Elapsed: 53m38s
Revision:
Refs: 2627
Uploader: crier

No Test Failures!



Error lines from build-log.txt

... skipping 510 lines ...
 ✓ Installing CNI 🔌
 • Installing StorageClass 💾  ...
 ✓ Installing StorageClass 💾
INFO: The kubeconfig file for the kind cluster is /tmp/e2e-kind4264252688
INFO: Loading image: "capzci.azurecr.io/cluster-api-azure-controller-amd64:20220908131859"
INFO: Loading image: "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.4"
INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.4" into the kind cluster "capz-e2e": error saving image "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.4" to "/tmp/image-tar960199310/image.tar": unable to read image data: Error response from daemon: reference does not exist
INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.4"
INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.4" into the kind cluster "capz-e2e": error saving image "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.4" to "/tmp/image-tar991190535/image.tar": unable to read image data: Error response from daemon: reference does not exist
INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.4"
INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.4" into the kind cluster "capz-e2e": error saving image "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.4" to "/tmp/image-tar157934595/image.tar": unable to read image data: Error response from daemon: reference does not exist
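The three [WARNING] lines above mean kind could not export those cluster-api images from the local Docker daemon ("reference does not exist"), i.e. they were never pulled or built on this host; the run proceeds anyway, presumably pulling them from the registry inside the cluster instead. A minimal sketch of how one of these images could be made available to the side-load by hand, with the tag and cluster name taken from the warnings:

  # pull the image into the local Docker daemon first
  docker pull registry.k8s.io/cluster-api/cluster-api-controller:v1.1.4
  # then side-load it into the kind cluster named in the warning
  kind load docker-image registry.k8s.io/cluster-api/cluster-api-controller:v1.1.4 --name capz-e2e
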
STEP: Initializing the bootstrap cluster
INFO: clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure azure
INFO: Waiting for provider controllers to be running
STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available
INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-8447dbccc5-f9nhp, container manager
STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available
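clusterctl init installs the core, bootstrap, control-plane, and infrastructure providers; the framework then blocks until each provider Deployment reports Available. Roughly the equivalent manual check for one of them, with the namespace and deployment names taken from the log (the timeout value is an assumption):

  kubectl wait deployment/capi-kubeadm-bootstrap-controller-manager \
    -n capi-kubeadm-bootstrap-system \
    --for=condition=Available --timeout=5m
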
... skipping 20 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:507

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Thu, 08 Sep 2022 13:28:50 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-7iet1b" for hosting the cluster
Sep  8 13:28:50.395: INFO: starting to create namespace for hosting the "capz-e2e-7iet1b" test spec
2022/09/08 13:28:50 failed trying to get namespace (capz-e2e-7iet1b):namespaces "capz-e2e-7iet1b" not found
INFO: Creating namespace capz-e2e-7iet1b
INFO: Creating event watcher for namespace "capz-e2e-7iet1b"
Sep  8 13:28:50.427: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-7iet1b-win-ha
INFO: Creating the workload cluster with name "capz-e2e-7iet1b-win-ha" using the "windows" template (Kubernetes v1.22.13, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
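The workload cluster is rendered from the "windows" flavor template with the parameters listed above. A hedged sketch of an equivalent manual invocation (the flag values come from the log line; the exact flags the e2e framework passes internally are an assumption):

  clusterctl generate cluster capz-e2e-7iet1b-win-ha \
    --flavor windows \
    --kubernetes-version v1.22.13 \
    --control-plane-machine-count 3 \
    --worker-machine-count 1 \
    | kubectl apply -f -
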
... skipping 94 lines ...
STEP: waiting for job default/curl-to-elb-job200qupgubek to be complete
Sep  8 13:56:36.341: INFO: waiting for job default/curl-to-elb-job200qupgubek to be complete
Sep  8 13:56:46.558: INFO: job default/curl-to-elb-job200qupgubek is complete, took 10.217422017s
STEP: connecting directly to the external LB service
Sep  8 13:56:46.558: INFO: starting attempts to connect directly to the external LB service
2022/09/08 13:56:46 [DEBUG] GET http://52.158.30.68
2022/09/08 13:57:16 [ERR] GET http://52.158.30.68 request failed: Get "http://52.158.30.68": dial tcp 52.158.30.68:80: i/o timeout
2022/09/08 13:57:16 [DEBUG] GET http://52.158.30.68: retrying in 1s (4 left)
2022/09/08 13:57:47 [ERR] GET http://52.158.30.68 request failed: Get "http://52.158.30.68": dial tcp 52.158.30.68:80: i/o timeout
2022/09/08 13:57:47 [DEBUG] GET http://52.158.30.68: retrying in 2s (3 left)
Sep  8 13:57:49.765: INFO: successfully connected to the external LB service
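The [DEBUG]/[ERR] pairs above are the signature of a retrying HTTP client: each GET that times out is retried after a growing backoff with a decrementing attempt budget ("4 left", "3 left") until the freshly provisioned load balancer starts answering, which here took about a minute. A rough command-line equivalent of the probe (the retry budget mirrors the log; these curl flags are an illustration, not what the test actually runs):

  # keep retrying the ELB address until it responds
  curl --retry 4 --retry-delay 1 --retry-all-errors \
       --connect-timeout 30 http://52.158.30.68
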
STEP: deleting the test resources
Sep  8 13:57:49.765: INFO: starting to delete external LB service web-windowstbe8e3-elb
Sep  8 13:57:49.934: INFO: waiting for the external LB service to be deleted: web-windowstbe8e3-elb
Sep  8 13:58:36.642: INFO: starting to delete deployment web-windowstbe8e3
... skipping 17 lines ...
Sep  8 13:59:32.965: INFO: Collecting boot logs for AzureMachine capz-e2e-7iet1b-win-ha-md-0-cb2wn

Sep  8 13:59:33.409: INFO: Collecting logs for Windows node capz-e2e-trfqh in cluster capz-e2e-7iet1b-win-ha in namespace capz-e2e-7iet1b

Sep  8 14:01:10.262: INFO: Collecting boot logs for AzureMachine capz-e2e-7iet1b-win-ha-md-win-trfqh

Failed to get logs for machine capz-e2e-7iet1b-win-ha-md-win-db79888bb-vvvvn, cluster capz-e2e-7iet1b/capz-e2e-7iet1b-win-ha: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "ctr.exe -n k8s.io containers list": Process exited with status 1, running command "ctr.exe -n k8s.io tasks list": Process exited with status 1]
Sep  8 14:01:11.458: INFO: Collecting logs for Windows node capz-e2e-wksgs in cluster capz-e2e-7iet1b-win-ha in namespace capz-e2e-7iet1b

Sep  8 14:02:47.772: INFO: Collecting boot logs for AzureMachine capz-e2e-7iet1b-win-ha-md-win-wksgs

Failed to get logs for machine capz-e2e-7iet1b-win-ha-md-win-db79888bb-zr272, cluster capz-e2e-7iet1b/capz-e2e-7iet1b-win-ha: [running command "Get-Content "C:\\cni.log"": Process exited with status 1, running command "ctr.exe -n k8s.io containers list": Process exited with status 1, running command "ctr.exe -n k8s.io tasks list": Process exited with status 1]
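Both Windows log-collection failures have the same shape: the collector runs a fixed set of commands on the node, and every one exited non-zero, most likely because the expected file and binaries were not at those paths on this image. The exact commands it attempted, copied from the errors above and runnable by hand on the node for comparison:

  Get-Content "C:\cni.log"
  ctr.exe -n k8s.io containers list
  ctr.exe -n k8s.io tasks list
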
STEP: Dumping workload cluster capz-e2e-7iet1b/capz-e2e-7iet1b-win-ha kube-system pod logs
STEP: Fetching kube-system pod logs took 1.030594365s
STEP: Dumping workload cluster capz-e2e-7iet1b/capz-e2e-7iet1b-win-ha Azure activity log
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-h99vm, container coredns
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-amd64-kwnc6, container kube-flannel
STEP: Collecting events for Pod kube-system/kube-flannel-ds-amd64-kwnc6
... skipping 49 lines ...
STEP: Fetching activity logs took 3.632572894s
STEP: Dumping all the Cluster API resources in the "capz-e2e-7iet1b" namespace
STEP: Deleting all clusters in the capz-e2e-7iet1b namespace
STEP: Deleting cluster capz-e2e-7iet1b-win-ha
INFO: Waiting for the Cluster capz-e2e-7iet1b/capz-e2e-7iet1b-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-7iet1b-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-7iet1b-win-ha-control-plane-ntrd2, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-7iet1b-win-ha-control-plane-njqjv, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-7iet1b-win-ha-control-plane-njqjv, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-ptd6q, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-7iet1b-win-ha-control-plane-ntrd2, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-h99vm, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-c64pb, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-nlm65, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-h9c9j, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-g2psj, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-7iet1b-win-ha-control-plane-bgnz9, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-b8zsm, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-tvmrb, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-vmnxq, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-5wdlh, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-7iet1b-win-ha-control-plane-ntrd2, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-7iet1b-win-ha-control-plane-bgnz9, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-kwnc6, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-572qf, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-7iet1b-win-ha-control-plane-ntrd2, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-7iet1b-win-ha-control-plane-bgnz9, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-7iet1b-win-ha-control-plane-bgnz9, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-tn49q, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-7iet1b-win-ha-control-plane-njqjv, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-7iet1b-win-ha-control-plane-njqjv, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-hr46x, container kube-proxy: http2: client connection lost
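The burst of "http2: client connection lost" messages is unsurprising at this point: the log watchers created earlier stream from the workload cluster's API server, and those streams drop as soon as its control plane is torn down by the delete. The deletion itself can be followed from the management cluster with something like (namespace taken from the log; the command is illustrative):

  kubectl get cluster -n capz-e2e-7iet1b -w
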
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-7iet1b
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 42m10s on Ginkgo node 1 of 3

... skipping 7 lines ...
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:507
------------------------------
STEP: Tearing down the management cluster


Ran 1 of 23 Specs in 2812.352 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 22 Skipped


Ginkgo ran 1 suite in 48m40.793093471s
Test Suite Passed
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
================ REDACTING LOGS ================
... skipping 10 lines ...