Result | FAILURE
Tests | 2 failed / 7 succeeded
Started |
Elapsed | 1h57m |
Revision | release-0.5 |
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sprivate\scluster\sCreates\sa\spublic\smanagement\scluster\sin\sthe\ssame\svnet$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:141
Expected success, but got an error:
    <*errors.withStack | 0xc000b7a498>: {
        error: <*exec.ExitError | 0xc002044180>{
            ProcessState: {
                pid: 28297,
                status: 256,
                rusage: {
                    Utime: {Sec: 0, Usec: 512883},
                    Stime: {Sec: 0, Usec: 160508},
                    Maxrss: 101596, Ixrss: 0, Idrss: 0, Isrss: 0,
                    Minflt: 11710, Majflt: 0, Nswap: 0,
                    Inblock: 0, Oublock: 25296,
                    Msgsnd: 0, Msgrcv: 0, Nsignals: 0,
                    Nvcsw: 4056, Nivcsw: 458,
                },
            },
            Stderr: nil,
        },
        stack: [0x1819e9e, 0x181a565, 0x19839b7, 0x1b3c528, 0x1c9d968, 0x1cbebcc, 0x813b23, 0x82154a, 0x1cbf2db, 0x7fc603, 0x7fc21c, 0x7fb547, 0x8024ef, 0x801b92, 0x811491, 0x810fa7, 0x810797, 0x812ea6, 0x820bd8, 0x820916, 0x1cae6ba, 0x529ce5, 0x474781],
    }
    exit status 1
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/clusterctl/clusterctl_helpers.go:272
from junit.e2e_suite.1.xml
INFO: "Creates a public management cluster in the same vnet" started at Mon, 11 Apr 2022 19:39:07 UTC on Ginkgo node 1 of 3 �[1mSTEP�[0m: Creating namespace "capz-e2e-4twoay" for hosting the cluster Apr 11 19:39:07.221: INFO: starting to create namespace for hosting the "capz-e2e-4twoay" test spec 2022/04/11 19:39:07 failed trying to get namespace (capz-e2e-4twoay):namespaces "capz-e2e-4twoay" not found INFO: Creating namespace capz-e2e-4twoay INFO: Creating event watcher for namespace "capz-e2e-4twoay" Apr 11 19:39:07.269: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-4twoay-public-custom-vnet �[1mSTEP�[0m: creating Azure clients with the workload cluster's subscription �[1mSTEP�[0m: creating a resource group �[1mSTEP�[0m: creating a network security group �[1mSTEP�[0m: creating a node security group �[1mSTEP�[0m: creating a node routetable �[1mSTEP�[0m: creating a virtual network INFO: Creating the workload cluster with name "capz-e2e-4twoay-public-custom-vnet" using the "custom-vnet" template (Kubernetes v1.22.1, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-4twoay-public-custom-vnet --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 1 --flavor custom-vnet INFO: Applying the cluster template yaml to the cluster cluster.cluster.x-k8s.io/capz-e2e-4twoay-public-custom-vnet created azurecluster.infrastructure.cluster.x-k8s.io/capz-e2e-4twoay-public-custom-vnet created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capz-e2e-4twoay-public-custom-vnet-control-plane created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-4twoay-public-custom-vnet-control-plane created machinedeployment.cluster.x-k8s.io/capz-e2e-4twoay-public-custom-vnet-md-0 created azuremachinetemplate.infrastructure.cluster.x-k8s.io/capz-e2e-4twoay-public-custom-vnet-md-0 created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capz-e2e-4twoay-public-custom-vnet-md-0 created azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created machinehealthcheck.cluster.x-k8s.io/capz-e2e-4twoay-public-custom-vnet-mhc-0 created clusterresourceset.addons.cluster.x-k8s.io/capz-e2e-4twoay-public-custom-vnet-calico created configmap/cni-capz-e2e-4twoay-public-custom-vnet-calico created INFO: Waiting for the cluster infrastructure to be provisioned �[1mSTEP�[0m: Waiting for cluster to enter the provisioned phase INFO: Waiting for control plane to be initialized INFO: Waiting for the first control plane machine managed by capz-e2e-4twoay/capz-e2e-4twoay-public-custom-vnet-control-plane to be provisioned �[1mSTEP�[0m: Waiting for one control plane node to exist INFO: Waiting for control plane to be ready INFO: Waiting for control plane capz-e2e-4twoay/capz-e2e-4twoay-public-custom-vnet-control-plane to be ready (implies underlying nodes to be ready as well) �[1mSTEP�[0m: Waiting for the control plane to be ready INFO: Waiting for the machine deployments to be provisioned �[1mSTEP�[0m: Waiting for the workload nodes to exist INFO: Waiting for the machine pools to be provisioned �[1mSTEP�[0m: checking that time synchronization is healthy on capz-e2e-4twoay-public-custom-vnet-control-plane-f8527 �[1mSTEP�[0m: checking that time synchronization is healthy on capz-e2e-4twoay-public-custom-vnet-md-0-mffp2 �[1mSTEP�[0m: creating a Kubernetes client to the workload cluster �[1mSTEP�[0m: Creating a namespace for 
hosting the azure-private-cluster test spec Apr 11 19:43:19.126: INFO: starting to create namespace for hosting the azure-private-cluster test spec INFO: Creating namespace capz-e2e-4twoay INFO: Creating event watcher for namespace "capz-e2e-4twoay" �[1mSTEP�[0m: Initializing the workload cluster INFO: clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure azure INFO: Waiting for provider controllers to be running �[1mSTEP�[0m: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-75467796c5-fd82n, container manager �[1mSTEP�[0m: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available INFO: Creating log watcher for controller capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-688b75d88d-9g2mg, container manager �[1mSTEP�[0m: Waiting for deployment capi-system/capi-controller-manager to be available INFO: Creating log watcher for controller capi-system/capi-controller-manager, pod capi-controller-manager-58757dd9b4-nw2tk, container manager �[1mSTEP�[0m: Waiting for deployment capz-system/capz-controller-manager to be available INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-6fbb8545bd-9hc4f, container manager �[1mSTEP�[0m: Ensure public API server is stable before creating private cluster �[1mSTEP�[0m: Creating a private workload cluster INFO: Creating the workload cluster with name "capz-e2e-yyr48v-private" using the "private" template (Kubernetes v1.22.1, 3 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-yyr48v-private --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 3 --worker-machine-count 1 --flavor private INFO: Applying the cluster template yaml to the cluster Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "default.azurecluster.infrastructure.cluster.x-k8s.io": failed to call webhook: the server could not find the requested resource Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "default.azuremachinetemplate.infrastructure.cluster.x-k8s.io": failed to call webhook: the server could not find the requested resource Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "default.azuremachinetemplate.infrastructure.cluster.x-k8s.io": failed to call webhook: the server could not find the requested resource �[1mSTEP�[0m: Dumping logs from the "capz-e2e-4twoay-public-custom-vnet" workload cluster �[1mSTEP�[0m: Dumping workload cluster capz-e2e-4twoay/capz-e2e-4twoay-public-custom-vnet logs Apr 11 19:44:58.990: INFO: INFO: Collecting logs for node capz-e2e-4twoay-public-custom-vnet-control-plane-f8527 in cluster capz-e2e-4twoay-public-custom-vnet in namespace capz-e2e-4twoay Apr 11 19:45:07.271: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-4twoay-public-custom-vnet-control-plane-f8527 Apr 11 19:45:07.893: INFO: INFO: Collecting logs for node capz-e2e-4twoay-public-custom-vnet-md-0-mffp2 in cluster capz-e2e-4twoay-public-custom-vnet 
in namespace capz-e2e-4twoay Apr 11 19:45:16.051: INFO: INFO: Collecting boot logs for AzureMachine capz-e2e-4twoay-public-custom-vnet-md-0-mffp2 �[1mSTEP�[0m: Dumping workload cluster capz-e2e-4twoay/capz-e2e-4twoay-public-custom-vnet kube-system pod logs �[1mSTEP�[0m: Fetching kube-system pod logs took 114.41248ms �[1mSTEP�[0m: Creating log watcher for controller kube-system/coredns-78fcd69978-zbg76, container coredns �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-kube-controllers-846b5f484d-6lsbt, container calico-kube-controllers �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-ntbd2, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-node-rtfcg, container calico-node �[1mSTEP�[0m: Creating log watcher for controller kube-system/coredns-78fcd69978-6w27g, container coredns �[1mSTEP�[0m: Dumping workload cluster capz-e2e-4twoay/capz-e2e-4twoay-public-custom-vnet Azure activity log �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-4twoay-public-custom-vnet-control-plane-f8527, container kube-scheduler �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-4twoay-public-custom-vnet-control-plane-f8527, container kube-controller-manager �[1mSTEP�[0m: Creating log watcher for controller kube-system/calico-node-c5gt5, container calico-node �[1mSTEP�[0m: Creating log watcher for controller kube-system/etcd-capz-e2e-4twoay-public-custom-vnet-control-plane-f8527, container etcd �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-4twoay-public-custom-vnet-control-plane-f8527, container kube-apiserver �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-56rrz, container kube-proxy �[1mSTEP�[0m: Fetching activity logs took 555.357746ms �[1mSTEP�[0m: Dumping all the Cluster API resources in the "capz-e2e-4twoay" namespace �[1mSTEP�[0m: Deleting all clusters in the capz-e2e-4twoay namespace �[1mSTEP�[0m: Deleting cluster capz-e2e-4twoay-public-custom-vnet INFO: Waiting for the Cluster capz-e2e-4twoay/capz-e2e-4twoay-public-custom-vnet to be deleted �[1mSTEP�[0m: Waiting for cluster capz-e2e-4twoay-public-custom-vnet to be deleted W0411 19:50:16.996149 24396 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding I0411 19:50:48.249389 24396 trace.go:205] Trace[452960670]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (11-Apr-2022 19:50:18.247) (total time: 30001ms): Trace[452960670]: [30.001692028s] [30.001692028s] END E0411 19:50:48.249469 24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-4twoay-public-custom-vnet-5c9d1387.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-4twoay/events?resourceVersion=2299": dial tcp 20.236.106.5:6443: i/o timeout I0411 19:51:21.307953 24396 trace.go:205] Trace[1619803072]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (11-Apr-2022 19:50:51.306) (total time: 30001ms): Trace[1619803072]: [30.00110036s] [30.00110036s] END E0411 19:51:21.308015 24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to 
watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-4twoay-public-custom-vnet-5c9d1387.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-4twoay/events?resourceVersion=2299": dial tcp 20.236.106.5:6443: i/o timeout I0411 19:51:55.662411 24396 trace.go:205] Trace[1575292692]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (11-Apr-2022 19:51:25.660) (total time: 30001ms): Trace[1575292692]: [30.001499975s] [30.001499975s] END E0411 19:51:55.662475 24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-4twoay-public-custom-vnet-5c9d1387.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-4twoay/events?resourceVersion=2299": dial tcp 20.236.106.5:6443: i/o timeout I0411 19:52:35.948236 24396 trace.go:205] Trace[1121938322]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (11-Apr-2022 19:52:05.946) (total time: 30001ms): Trace[1121938322]: [30.001353847s] [30.001353847s] END E0411 19:52:35.948313 24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-4twoay-public-custom-vnet-5c9d1387.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-4twoay/events?resourceVersion=2299": dial tcp 20.236.106.5:6443: i/o timeout I0411 19:53:27.508178 24396 trace.go:205] Trace[1196048340]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (11-Apr-2022 19:52:57.507) (total time: 30000ms): Trace[1196048340]: [30.000753881s] [30.000753881s] END E0411 19:53:27.508269 24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-4twoay-public-custom-vnet-5c9d1387.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-4twoay/events?resourceVersion=2299": dial tcp 20.236.106.5:6443: i/o timeout I0411 19:54:32.583620 24396 trace.go:205] Trace[2109871343]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167 (11-Apr-2022 19:54:02.582) (total time: 30001ms): Trace[2109871343]: [30.001065295s] [30.001065295s] END E0411 19:54:32.583687 24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-4twoay-public-custom-vnet-5c9d1387.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-4twoay/events?resourceVersion=2299": dial tcp 20.236.106.5:6443: i/o timeout E0411 19:55:23.925377 24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-4twoay-public-custom-vnet-5c9d1387.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-4twoay/events?resourceVersion=2299": dial tcp: lookup capz-e2e-4twoay-public-custom-vnet-5c9d1387.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host �[1mSTEP�[0m: Deleting namespace used for hosting the "create-workload-cluster" test spec INFO: Deleting namespace capz-e2e-4twoay �[1mSTEP�[0m: Running additional cleanup for the "create-workload-cluster" test spec Apr 11 19:55:28.179: INFO: deleting an existing virtual network "custom-vnet" Apr 11 19:55:38.601: INFO: deleting an existing route table "node-routetable" Apr 11 
19:55:40.850: INFO: deleting an existing network security group "node-nsg" Apr 11 19:55:51.089: INFO: deleting an existing network security group "control-plane-nsg" Apr 11 19:56:01.347: INFO: verifying the existing resource group "capz-e2e-4twoay-public-custom-vnet" is empty Apr 11 19:56:01.390: INFO: deleting the existing resource group "capz-e2e-4twoay-public-custom-vnet" E0411 19:56:02.454367 24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-4twoay-public-custom-vnet-5c9d1387.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-4twoay/events?resourceVersion=2299": dial tcp: lookup capz-e2e-4twoay-public-custom-vnet-5c9d1387.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host E0411 19:57:01.127497 24396 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.4/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-4twoay-public-custom-vnet-5c9d1387.northcentralus.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-4twoay/events?resourceVersion=2299": dial tcp: lookup capz-e2e-4twoay-public-custom-vnet-5c9d1387.northcentralus.cloudapp.azure.com on 10.63.240.10:53: no such host �[1mSTEP�[0m: Checking if any resources are left over in Azure for spec "create-workload-cluster" �[1mSTEP�[0m: Redacting sensitive information from logs INFO: "Creates a public management cluster in the same vnet" ran for 18m26s on Ginkgo node 1 of 3
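The private-cluster spec failed the moment the "private" template was applied: every defaulting webhook call against the freshly initialized CAPZ provider returned "the server could not find the requested resource". A possible starting point for triage on a live reproduction, assuming kubeconfig access to the management cluster that rejected the apply (these commands are illustrative and not part of the test output):

  # Is the CAPZ controller, which also serves the webhooks, running and is its webhook service present?
  kubectl -n capz-system get pods,svc

  # Do the webhook configurations named in the error exist and point at a serving endpoint?
  kubectl get mutatingwebhookconfigurations,validatingwebhookconfigurations | grep -i capz

  # Controller logs around 19:44 UTC may show certificate or webhook-registration errors.
  kubectl -n capz-system logs deploy/capz-controller-manager --since=1h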
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\san\sAKS\scluster\swith\sa\ssingle\scontrol\splane\snode\sand\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:115
Timed out after 1800.004s.
Expected
    <bool>: false
to be true
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v0.4.8-0.20220215165403-0234afe87ffe/framework/cluster_helpers.go:165
from junit.e2e_suite.3.xml
INFO: "with a single control plane node and 1 node" ran for 24m41s on Ginkgo node 3 of 3 �[1mSTEP�[0m: Creating namespace "capz-e2e-n8mc8a" for hosting the cluster Apr 11 20:23:55.034: INFO: starting to create namespace for hosting the "capz-e2e-n8mc8a" test spec 2022/04/11 20:23:55 failed trying to get namespace (capz-e2e-n8mc8a):namespaces "capz-e2e-n8mc8a" not found INFO: Creating namespace capz-e2e-n8mc8a INFO: Creating event watcher for namespace "capz-e2e-n8mc8a" Apr 11 20:23:55.091: INFO: Creating cluster identity secret %!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-n8mc8a-aks INFO: Creating the workload cluster with name "capz-e2e-n8mc8a-aks" using the "aks-multi-tenancy" template (Kubernetes v1.22.6, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-e2e-n8mc8a-aks --infrastructure (default) --kubernetes-version v1.22.6 --control-plane-machine-count 1 --worker-machine-count 1 --flavor aks-multi-tenancy INFO: Applying the cluster template yaml to the cluster cluster.cluster.x-k8s.io/capz-e2e-n8mc8a-aks created azuremanagedcontrolplane.infrastructure.cluster.x-k8s.io/capz-e2e-n8mc8a-aks created azuremanagedcluster.infrastructure.cluster.x-k8s.io/capz-e2e-n8mc8a-aks created machinepool.cluster.x-k8s.io/agentpool0 created azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool0 created machinepool.cluster.x-k8s.io/agentpool1 created azuremanagedmachinepool.infrastructure.cluster.x-k8s.io/agentpool1 created azureclusteridentity.infrastructure.cluster.x-k8s.io/cluster-identity created INFO: Waiting for the cluster infrastructure to be provisioned �[1mSTEP�[0m: Waiting for cluster to enter the provisioned phase INFO: Waiting for control plane to be initialized Apr 11 20:32:26.946: INFO: Waiting for the first control plane machine managed by capz-e2e-n8mc8a/capz-e2e-n8mc8a-aks to be provisioned �[1mSTEP�[0m: Waiting for atleast one control plane node to exist INFO: Waiting for control plane to be ready Apr 11 20:51:28.584: INFO: Waiting for the first control plane machine managed by capz-e2e-n8mc8a/capz-e2e-n8mc8a-aks to be provisioned �[1mSTEP�[0m: Waiting for all control plane nodes to exist INFO: Waiting for the machine deployments to be provisioned INFO: Waiting for the machine pools to be provisioned �[1mSTEP�[0m: Waiting for the machine pool workload nodes to exist �[1mSTEP�[0m: Waiting for the machine pool workload nodes to exist Apr 11 20:51:28.963: INFO: want 2 instances, found 0 ready and 0 available. generation: 1, observedGeneration: 0 Apr 11 20:51:33.985: INFO: want 2 instances, found 2 ready and 2 available. 
generation: 1, observedGeneration: 1 Apr 11 20:51:34.006: INFO: mapping nsenter pods to hostnames for host-by-host execution Apr 11 20:51:34.006: INFO: found host aks-agentpool1-16760008-vmss000000 with pod nsenter-dqhzj Apr 11 20:51:34.006: INFO: found host aks-agentpool0-16760008-vmss000000 with pod nsenter-rkrcn �[1mSTEP�[0m: checking that time synchronization is healthy on aks-agentpool1-16760008-vmss000000 �[1mSTEP�[0m: checking that time synchronization is healthy on aks-agentpool1-16760008-vmss000000 �[1mSTEP�[0m: time sync OK for host aks-agentpool1-16760008-vmss000000 �[1mSTEP�[0m: time sync OK for host aks-agentpool1-16760008-vmss000000 �[1mSTEP�[0m: time sync OK for host aks-agentpool1-16760008-vmss000000 �[1mSTEP�[0m: time sync OK for host aks-agentpool1-16760008-vmss000000 �[1mSTEP�[0m: Dumping logs from the "capz-e2e-n8mc8a-aks" workload cluster �[1mSTEP�[0m: Dumping workload cluster capz-e2e-n8mc8a/capz-e2e-n8mc8a-aks logs Apr 11 20:51:34.974: INFO: INFO: Collecting logs for node aks-agentpool1-16760008-vmss000000 in cluster capz-e2e-n8mc8a-aks in namespace capz-e2e-n8mc8a Apr 11 20:53:45.377: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0 Failed to get logs for machine pool agentpool0, cluster capz-e2e-n8mc8a/capz-e2e-n8mc8a-aks: [dialing public load balancer at capz-e2e-n8mc8a-aks-62c552a4.hcp.northcentralus.azmk8s.io: dial tcp 52.252.137.163:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource '0' not found."] Apr 11 20:53:46.009: INFO: INFO: Collecting logs for node aks-agentpool1-16760008-vmss000000 in cluster capz-e2e-n8mc8a-aks in namespace capz-e2e-n8mc8a Apr 11 20:55:56.453: INFO: INFO: Collecting boot logs for VMSS instance 0 of scale set 0 Failed to get logs for machine pool agentpool1, cluster capz-e2e-n8mc8a/capz-e2e-n8mc8a-aks: [dialing public load balancer at capz-e2e-n8mc8a-aks-62c552a4.hcp.northcentralus.azmk8s.io: dial tcp 52.252.137.163:22: connect: connection timed out, failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. 
Parent resource '0' not found."] �[1mSTEP�[0m: Dumping workload cluster capz-e2e-n8mc8a/capz-e2e-n8mc8a-aks kube-system pod logs �[1mSTEP�[0m: Fetching kube-system pod logs took 285.641623ms �[1mSTEP�[0m: Dumping workload cluster capz-e2e-n8mc8a/capz-e2e-n8mc8a-aks Azure activity log �[1mSTEP�[0m: Creating log watcher for controller kube-system/csi-azuredisk-node-84nzv, container azuredisk �[1mSTEP�[0m: Creating log watcher for controller kube-system/coredns-845757d86-g7xs8, container coredns �[1mSTEP�[0m: Creating log watcher for controller kube-system/csi-azurefile-node-tx24r, container node-driver-registrar �[1mSTEP�[0m: Creating log watcher for controller kube-system/coredns-autoscaler-7d56cd888-qrsgw, container autoscaler �[1mSTEP�[0m: Creating log watcher for controller kube-system/csi-azuredisk-node-2dtbk, container node-driver-registrar �[1mSTEP�[0m: Creating log watcher for controller kube-system/metrics-server-6576d9ccf8-nt7kn, container metrics-server �[1mSTEP�[0m: Creating log watcher for controller kube-system/azure-ip-masq-agent-wbpw8, container azure-ip-masq-agent �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-xqrf7, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/csi-azuredisk-node-84nzv, container liveness-probe �[1mSTEP�[0m: Creating log watcher for controller kube-system/csi-azurefile-node-mfr29, container azurefile �[1mSTEP�[0m: Creating log watcher for controller kube-system/csi-azuredisk-node-84nzv, container node-driver-registrar �[1mSTEP�[0m: Creating log watcher for controller kube-system/coredns-845757d86-gt68f, container coredns �[1mSTEP�[0m: Creating log watcher for controller kube-system/csi-azuredisk-node-2dtbk, container azuredisk �[1mSTEP�[0m: Creating log watcher for controller kube-system/cloud-node-manager-lfr7b, container cloud-node-manager �[1mSTEP�[0m: Creating log watcher for controller kube-system/csi-azurefile-node-tx24r, container liveness-probe �[1mSTEP�[0m: Creating log watcher for controller kube-system/csi-azuredisk-node-2dtbk, container liveness-probe �[1mSTEP�[0m: Creating log watcher for controller kube-system/cloud-node-manager-mz8g9, container cloud-node-manager �[1mSTEP�[0m: Creating log watcher for controller kube-system/tunnelfront-66d894875c-v7skp, container tunnel-front �[1mSTEP�[0m: Creating log watcher for controller kube-system/kube-proxy-79gxj, container kube-proxy �[1mSTEP�[0m: Creating log watcher for controller kube-system/azure-ip-masq-agent-f2kf7, container azure-ip-masq-agent �[1mSTEP�[0m: Creating log watcher for controller kube-system/csi-azurefile-node-tx24r, container azurefile �[1mSTEP�[0m: Creating log watcher for controller kube-system/csi-azurefile-node-mfr29, container node-driver-registrar �[1mSTEP�[0m: Creating log watcher for controller kube-system/csi-azurefile-node-mfr29, container liveness-probe �[1mSTEP�[0m: Fetching activity logs took 531.19285ms �[1mSTEP�[0m: Dumping all the Cluster API resources in the "capz-e2e-n8mc8a" namespace �[1mSTEP�[0m: Deleting all clusters in the capz-e2e-n8mc8a namespace �[1mSTEP�[0m: Deleting cluster capz-e2e-n8mc8a-aks INFO: Waiting for the Cluster capz-e2e-n8mc8a/capz-e2e-n8mc8a-aks to be deleted �[1mSTEP�[0m: Waiting for cluster capz-e2e-n8mc8a-aks to be deleted �[1mSTEP�[0m: Redacting sensitive information from logs
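The AKS spec timed out after 1800.004s waiting for the control plane machine to be provisioned (no progress between 20:32 and 20:51 UTC). On a live reproduction, one way to see what CAPZ and AKS reported at that point might be the following; the resource kinds and namespace come from the log above, while the Azure resource group name is an assumption (the AKS flavors typically name it after the cluster):

  # Cluster API view of the AKS control plane and agent pools.
  kubectl -n capz-e2e-n8mc8a get cluster,machinepool -o wide
  kubectl -n capz-e2e-n8mc8a describe azuremanagedcontrolplane capz-e2e-n8mc8a-aks

  # Azure's view of the managed cluster's provisioning state.
  az aks show --resource-group capz-e2e-n8mc8a-aks --name capz-e2e-n8mc8a-aks --query provisioningState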
capz-e2e Workload cluster creation Creating a GPU-enabled cluster with a single control plane node and 1 node
capz-e2e Workload cluster creation Creating a VMSS cluster with a single control plane node and an AzureMachinePool with 2 nodes
capz-e2e Workload cluster creation Creating a Windows Enabled cluster With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
capz-e2e Workload cluster creation Creating a Windows enabled VMSS cluster with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
capz-e2e Workload cluster creation Creating a cluster that uses the external cloud provider with a 1 control plane nodes and 2 worker nodes
capz-e2e Workload cluster creation Creating a ipv6 control-plane cluster With ipv6 worker node
capz-e2e Workload cluster creation With 3 control-plane nodes and 2 worker nodes
capz-e2e Conformance Tests conformance-tests
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a HA cluster using scale in rollout Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the KCP upgrade spec in a single control plane cluster Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
capz-e2e Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e Running the Cluster API E2E tests Running the self-hosted spec Should pivot the bootstrap cluster to a self-hosted cluster
capz-e2e Running the Cluster API E2E tests Should adopt up-to-date control plane Machines without modification Should adopt up-to-date control plane Machines without modification
capz-e2e Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time