Result: FAILURE
Tests: 1 failed / 0 succeeded
Started: 2022-04-16 04:19
Elapsed: 45m13s
Revision: release-1.2

Test Failures


capz-e2e Conformance Tests conformance-tests (34m9s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sConformance\sTests\sconformance\-tests$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:103
Timed out after 1200.001s.
Expected
    <int>: 0
to equal
    <int>: 2
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/framework/machinedeployment_helpers.go:121
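
Note on the failure above: machinedeployment_helpers.go:121 is the cluster-api framework's wait for a MachineDeployment's machines to become ready, so "Expected <int>: 0 to equal <int>: 2" means zero of the two requested worker nodes came up within the 1200s budget. A minimal sketch of that polling pattern, assuming Gomega's Eventually and a hypothetical countReadyWorkerNodes helper (illustrative, not the upstream source):

package e2e

import (
	"context"
	"testing"
	"time"

	. "github.com/onsi/gomega"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// countReadyWorkerNodes returns how many non-control-plane nodes report Ready.
// (Hypothetical helper; older clusters label control planes "master" instead.)
func countReadyWorkerNodes(ctx context.Context, c kubernetes.Interface) int {
	nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return 0 // treat an unreachable API server as "no nodes ready yet"
	}
	ready := 0
	for _, n := range nodes.Items {
		if _, isCP := n.Labels["node-role.kubernetes.io/control-plane"]; isCP {
			continue
		}
		for _, cond := range n.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				ready++
				break
			}
		}
	}
	return ready
}

// waitForWorkers polls until `want` workers are ready; on timeout Gomega
// prints exactly the shape seen above: "Expected <int>: 0 to equal <int>: 2".
func waitForWorkers(t *testing.T, c kubernetes.Interface, want int) {
	g := NewWithT(t)
	g.Eventually(func() int {
		return countReadyWorkerNodes(context.Background(), c)
	}, 20*time.Minute, 10*time.Second).Should(Equal(want)) // 20min = the 1200s in the log
}

That reading matches the rest of the log: both md-0 AzureMachines exist, but log collection later fails to reach them, suggesting they never joined the cluster as functional nodes.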
Full stdout/stderr is in junit.e2e_suite.1.xml.




Error lines from build-log.txt

... skipping 488 lines ...
 ✓ Installing CNI 🔌
 • Installing StorageClass 💾  ...
 ✓ Installing StorageClass 💾
INFO: The kubeconfig file for the kind cluster is /tmp/e2e-kind151875425
INFO: Loading image: "localhost:5000/ci-e2e/cluster-api-azure-controller-amd64:20220416041918"
INFO: Loading image: "k8s.gcr.io/cluster-api/cluster-api-controller:v1.1.2"
INFO: [WARNING] Unable to load image "k8s.gcr.io/cluster-api/cluster-api-controller:v1.1.2" into the kind cluster "capz-e2e": error saving image "k8s.gcr.io/cluster-api/cluster-api-controller:v1.1.2" to "/tmp/image-tar2576181188/image.tar": unable to read image data: Error response from daemon: reference does not exist
INFO: Loading image: "k8s.gcr.io/cluster-api/kubeadm-bootstrap-controller:v1.1.2"
INFO: [WARNING] Unable to load image "k8s.gcr.io/cluster-api/kubeadm-bootstrap-controller:v1.1.2" into the kind cluster "capz-e2e": error saving image "k8s.gcr.io/cluster-api/kubeadm-bootstrap-controller:v1.1.2" to "/tmp/image-tar1285563728/image.tar": unable to read image data: Error response from daemon: reference does not exist
INFO: Loading image: "k8s.gcr.io/cluster-api/kubeadm-control-plane-controller:v1.1.2"
INFO: [WARNING] Unable to load image "k8s.gcr.io/cluster-api/kubeadm-control-plane-controller:v1.1.2" into the kind cluster "capz-e2e": error saving image "k8s.gcr.io/cluster-api/kubeadm-control-plane-controller:v1.1.2" to "/tmp/image-tar26590061/image.tar": unable to read image data: Error response from daemon: reference does not exist
STEP: Initializing the bootstrap cluster
INFO: clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure azure
INFO: Waiting for provider controllers to be running
STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available
INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-6984cdc687-gqvdw, container manager
STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available
... skipping 11 lines ...
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:103
[BeforeEach] Conformance Tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:59
INFO: Cluster name is capz-conf-wa56q5
STEP: Creating namespace "capz-conf-wa56q5" for hosting the cluster
Apr 16 04:29:49.757: INFO: starting to create namespace for hosting the "capz-conf-wa56q5" test spec
2022/04/16 04:29:49 failed trying to get namespace (capz-conf-wa56q5):namespaces "capz-conf-wa56q5" not found
INFO: Creating namespace capz-conf-wa56q5
INFO: Creating event watcher for namespace "capz-conf-wa56q5"
[Measure] conformance-tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:103
INFO: Creating the workload cluster with name "capz-conf-wa56q5" using the "conformance-ci-artifacts" template (Kubernetes v1.24.0-beta.0.137+a750d8054a6cb3, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 39 lines ...
Apr 16 04:55:18.704: INFO: INFO: Collecting boot logs for AzureMachine capz-conf-wa56q5-control-plane-4llrr

Apr 16 04:55:20.983: INFO: INFO: Collecting logs for node capz-conf-wa56q5-md-0-87s8m in cluster capz-conf-wa56q5 in namespace capz-conf-wa56q5

Apr 16 04:55:26.903: INFO: INFO: Collecting boot logs for AzureMachine capz-conf-wa56q5-md-0-87s8m

Failed to get logs for machine capz-conf-wa56q5-md-0-596fb89b99-vqwgw, cluster capz-conf-wa56q5/capz-conf-wa56q5: dialing from control plane to target node at capz-conf-wa56q5-md-0-87s8m: ssh: rejected: connect failed (Temporary failure in name resolution)
Apr 16 04:55:27.298: INFO: INFO: Collecting logs for node capz-conf-wa56q5-md-0-kt8hb in cluster capz-conf-wa56q5 in namespace capz-conf-wa56q5

Apr 16 04:55:30.147: INFO: INFO: Collecting boot logs for AzureMachine capz-conf-wa56q5-md-0-kt8hb

Failed to get logs for machine capz-conf-wa56q5-md-0-596fb89b99-wlk2z, cluster capz-conf-wa56q5/capz-conf-wa56q5: [dialing from control plane to target node at capz-conf-wa56q5-md-0-kt8hb: ssh: rejected: connect failed (Temporary failure in name resolution), failed to get boot diagnostics data: compute.VirtualMachinesClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="NotFound" Message="The entity was not found in this Azure location." Target="vmName"]
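
Both "Failed to get logs" errors above come from log collection that uses the control plane VM as an SSH jump host: the worker's hostname is dialed from the control plane, so "Temporary failure in name resolution" points at DNS on the control plane, not in the test pod. A minimal sketch of that jump-host pattern, assuming golang.org/x/crypto/ssh (addresses and credentials are placeholders):

package main

import (
	"fmt"

	"golang.org/x/crypto/ssh"
)

// dialWorkerViaControlPlane opens an SSH connection to the control plane and
// then dials the worker from there; the second Dial resolves workerHost with
// the control plane's DNS, which is where this run's lookup failed.
func dialWorkerViaControlPlane(cpAddr, workerHost string, cfg *ssh.ClientConfig) error {
	bastion, err := ssh.Dial("tcp", cpAddr+":22", cfg)
	if err != nil {
		return fmt.Errorf("dialing control plane: %w", err)
	}
	defer bastion.Close()

	conn, err := bastion.Dial("tcp", workerHost+":22")
	if err != nil {
		return fmt.Errorf("dialing from control plane to target node at %s: %w", workerHost, err)
	}
	return conn.Close()
}

func main() {
	cfg := &ssh.ClientConfig{
		User:            "capi",                      // placeholder user
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway debug probe
		// Auth omitted: supply ssh.PublicKeys(...) or similar in real use.
	}
	if err := dialWorkerViaControlPlane("10.0.0.4", "capz-conf-wa56q5-md-0-87s8m", cfg); err != nil {
		fmt.Println(err)
	}
}

The second message additionally reports a 404 from RetrieveBootDiagnosticsData, i.e. Azure could not locate the VM for boot diagnostics either, consistent with the worker machines never becoming reachable.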
STEP: Dumping workload cluster capz-conf-wa56q5/capz-conf-wa56q5 kube-system pod logs
STEP: Fetching kube-system pod logs took 1.140801175s
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-conf-wa56q5-control-plane-4llrr, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-conf-wa56q5-control-plane-4llrr
STEP: Collecting events for Pod kube-system/calico-kube-controllers-5bfccc59bc-wnct9
STEP: Collecting events for Pod kube-system/kube-proxy-glq65
STEP: failed to find events of Pod "kube-apiserver-capz-conf-wa56q5-control-plane-4llrr"
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-conf-wa56q5-control-plane-4llrr, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-conf-wa56q5-control-plane-4llrr, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/metrics-server-7d674f87b8-r4jxc, container metrics-server
STEP: Collecting events for Pod kube-system/metrics-server-7d674f87b8-r4jxc
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-conf-wa56q5-control-plane-4llrr
STEP: failed to find events of Pod "kube-controller-manager-capz-conf-wa56q5-control-plane-4llrr"
STEP: Creating log watcher for controller kube-system/coredns-6d4b75cb6d-mmx78, container coredns
STEP: Dumping workload cluster capz-conf-wa56q5/capz-conf-wa56q5 Azure activity log
STEP: Creating log watcher for controller kube-system/kube-proxy-glq65, container kube-proxy
STEP: Collecting events for Pod kube-system/coredns-6d4b75cb6d-mmx78
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-conf-wa56q5-control-plane-4llrr
STEP: failed to find events of Pod "kube-scheduler-capz-conf-wa56q5-control-plane-4llrr"
STEP: Creating log watcher for controller kube-system/etcd-capz-conf-wa56q5-control-plane-4llrr, container etcd
STEP: Collecting events for Pod kube-system/etcd-capz-conf-wa56q5-control-plane-4llrr
STEP: Creating log watcher for controller kube-system/calico-node-srrjb, container calico-node
STEP: failed to find events of Pod "etcd-capz-conf-wa56q5-control-plane-4llrr"
STEP: Creating log watcher for controller kube-system/coredns-6d4b75cb6d-lngmh, container coredns
STEP: Collecting events for Pod kube-system/coredns-6d4b75cb6d-lngmh
STEP: Collecting events for Pod kube-system/calico-node-srrjb
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-5bfccc59bc-wnct9, container calico-kube-controllers
STEP: Got error while iterating over activity logs for resource group capz-conf-wa56q5: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000449603s
STEP: Dumping all the Cluster API resources in the "capz-conf-wa56q5" namespace
STEP: Deleting all clusters in the capz-conf-wa56q5 namespace
STEP: Deleting cluster capz-conf-wa56q5
INFO: Waiting for the Cluster capz-conf-wa56q5/capz-conf-wa56q5 to be deleted
STEP: Waiting for cluster capz-conf-wa56q5 to be deleted
STEP: Got error while streaming logs for pod kube-system/metrics-server-7d674f87b8-r4jxc, container metrics-server: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-5bfccc59bc-wnct9, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-conf-wa56q5-control-plane-4llrr, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-6d4b75cb6d-lngmh, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-srrjb, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-glq65, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-conf-wa56q5-control-plane-4llrr, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-conf-wa56q5-control-plane-4llrr, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-conf-wa56q5-control-plane-4llrr, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-6d4b75cb6d-mmx78, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "conformance-tests" test spec
INFO: Deleting namespace capz-conf-wa56q5
STEP: Checking if any resources are left over in Azure for spec "conformance-tests"
STEP: Redacting sensitive information from logs

• Failure [2049.112 seconds]
... skipping 62 lines ...

JUnit report was created: /logs/artifacts/junit.e2e_suite.1.xml


Summarizing 1 Failure:

[Fail] Conformance Tests [Measurement] conformance-tests 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.2/framework/machinedeployment_helpers.go:121

Ran 1 of 19 Specs in 2384.081 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 18 Skipped
--- FAIL: TestE2E (2384.09s)
FAIL

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 10 lines ...

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=1.16.5


Ginkgo ran 1 suite in 41m21.145900909s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
make[2]: *** [Makefile:608: test-e2e-run] Error 1
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[1]: *** [Makefile:624: test-e2e-local] Error 2
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:633: test-conformance] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...