PR       | oprinmarius: Apply Windows Exporter manifests after Prometheus Core Manifests
Result   | FAILURE
Tests    | 0 failed / 0 succeeded
Started  |
Elapsed  | 36m12s
Revision | 298fb3b315ae042ab2c7598e0c526aa62ba65c27
Refs     | 2051
... skipping 672 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
# Deploy CAPI
curl --retry 3 -sSL https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.1.2/cluster-api-components.yaml | /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/envsubst-v2.0.0-20210730161058-179042472c46 | kubectl apply -f -
namespace/capi-system created
customresourcedefinition.apiextensions.k8s.io/clusterclasses.cluster.x-k8s.io created
... skipping 124 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! kubectl get secrets | grep capz-de0bpq-kubeconfig; do sleep 1; done"
capz-de0bpq-kubeconfig   cluster.x-k8s.io/secret   1   1s
# Get kubeconfig and store it locally.
kubectl get secrets capz-de0bpq-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! kubectl --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-de0bpq-control-plane-v8b4g   NotReady   <none>   1s   v1.22.1
run "kubectl --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Waiting for 1 control plane machine(s), 2 worker machine(s), and windows machine(s) to become Ready
node/capz-de0bpq-control-plane-v8b4g condition met
node/capz-de0bpq-md-0-46tts condition met
... skipping 46 lines ...
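The kubeconfig retrieval above pipes the secret's `data.value` field through `jq` and `base64 --decode`. A minimal, self-contained sketch of that same pipeline, run against a mock secret JSON rather than a live cluster (the secret body and its base64 payload here are placeholders, and `jq` is assumed to be installed):

```shell
# Stand-in for `kubectl get secrets capz-de0bpq-kubeconfig -o json` output;
# the "value" field carries base64-encoded kubeconfig data (a placeholder here).
secret='{"apiVersion":"v1","kind":"Secret","data":{"value":"YXBpVmVyc2lvbjogdjE="}}'

# Same extraction pipeline the job runs: select .data.value, then base64-decode.
echo "$secret" | jq -r .data.value | base64 --decode > ./kubeconfig

cat ./kubeconfig
rm -f ./kubeconfig
```

With the placeholder payload this prints `apiVersion: v1`; against a real cluster the decoded file is a complete kubeconfig usable via `kubectl --kubeconfig=./kubeconfig`.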
go: downloading google.golang.org/grpc v1.36.0
go: downloading sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.22
go: downloading google.golang.org/genproto v0.0.0-20210312152112-fc591d9ea70f
I0504 21:43:20.912002   27023 network_performance_measurement.go:87] Registering Network Performance Measurement
I0504 21:43:21.089934   27023 clusterloader.go:152] ClusterConfig.Nodes set to 3
I0504 21:43:21.129288   27023 clusterloader.go:158] ClusterConfig.MasterName set to capz-de0bpq-control-plane-v8b4g
E0504 21:43:21.169410   27023 clusterloader.go:169] Getting master external ip error: didn't find any ExternalIP master IPs
I0504 21:43:21.211731   27023 clusterloader.go:176] ClusterConfig.MasterInternalIP set to [10.0.0.4]
I0504 21:43:21.211929   27023 clusterloader.go:268] Using config: {ClusterConfig:{KubeConfigPath:/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig RunFromCluster:false Nodes:3 Provider:0xc0000496a0 EtcdCertificatePath:/etc/srv/kubernetes/pki/etcd-apiserver-server.crt EtcdKeyPath:/etc/srv/kubernetes/pki/etcd-apiserver-server.key EtcdInsecurePort:2382 MasterIPs:[] MasterInternalIPs:[10.0.0.4] MasterName:capz-de0bpq-control-plane-v8b4g DeleteStaleNamespaces:false DeleteAutomanagedNamespaces:true APIServerPprofByClientEnabled:true KubeletPort:10250 K8SClientsNumber:1 SkipClusterVerification:false} ReportDir:reports EnableExecService:true ModifierConfig:{OverwriteTestConfig:[] SkipSteps:[]} PrometheusConfig:{TearDownServer:true EnableServer:true EnablePushgateway:false ScrapeEtcd:false ScrapeNodeExporter:false ScrapeWindowsNodeExporter:true ScrapeKubelets:false ScrapeMasterKubelets:false ScrapeKubeProxy:true ScrapeKubeStateMetrics:false ScrapeMetricsServerMetrics:false ScrapeNodeLocalDNS:false ScrapeAnet:false ScrapeCiliumOperator:false APIServerScrapePort:443 SnapshotProject: ManifestPath:$GOPATH/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests CoreManifests:$GOPATH/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/*.yaml DefaultServiceMonitors:$GOPATH/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/default/*.yaml KubeStateMetricsManifests:$GOPATH/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/exporters/kube-state-metrics/*.yaml MasterIPServiceMonitors:$GOPATH/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/*.yaml MetricsServerManifests:$GOPATH/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/exporters/metrics-server/*.yaml NodeExporterPod:$GOPATH/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/exporters/node_exporter/node-exporter.yaml WindowsNodeExporterManifests:$GOPATH/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/exporters/windows_node_exporter/*.yaml PushgatewayManifests:$GOPATH/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/pushgateway/*.yaml StorageClassProvisioner:kubernetes.io/azure-disk StorageClassVolumeType:StandardSSD_LRS ReadyTimeout:15m0s} OverridePaths:[]}
I0504 21:43:21.254485   27023 cluster.go:74] Listing cluster nodes:
I0504 21:43:21.254556   27023 cluster.go:86] Name: capz-de0b-clsj6, clusterIP: 10.1.0.6, externalIP: , isSchedulable: false
I0504 21:43:21.254565   27023 cluster.go:86] Name: capz-de0b-tb9j2, clusterIP: 10.1.0.5, externalIP: , isSchedulable: false
I0504 21:43:21.254571   27023 cluster.go:86] Name: capz-de0bpq-control-plane-v8b4g, clusterIP: 10.0.0.4, externalIP: , isSchedulable: true
... skipping 53 lines ...
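The `PrometheusConfig` above points each manifest group (`CoreManifests`, `WindowsNodeExporterManifests`, ...) at a filesystem glob; clusterloader2 expands each glob and applies every matching manifest in turn (the `framework.go:274] Applying ...` lines further down). A rough shell sketch of that expand-and-apply step, using an illustrative directory rather than the real `$GOPATH` tree (the real tool does this internally in Go):

```shell
# Illustrative manifest directory; stands in for e.g.
# $GOPATH/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests
MANIFEST_DIR="${MANIFEST_DIR:-/tmp/prometheus-manifests}"

# Expand the glob and handle each manifest, skipping cleanly when nothing matches.
for f in "${MANIFEST_DIR}"/*.yaml; do
  [ -e "$f" ] || continue     # unexpanded glob: no manifests present
  echo "Applying $f"
  # kubectl apply -f "$f"     # equivalent kubectl form (not executed here)
done
```

The `[ -e "$f" ] || continue` guard matters because an unmatched glob is passed through literally by the shell; clusterloader2's Go implementation gets the same effect from `filepath.Glob` returning an empty slice.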
I0504 21:43:28.273738   27023 framework.go:274] Applying /home/prow/go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/default/prometheus-serviceMonitorLegacyKubeDNS.yaml
I0504 21:43:28.315223   27023 prometheus.go:327] Exposing kube-apiserver metrics in the cluster
I0504 21:43:28.460649   27023 framework.go:274] Applying /home/prow/go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-endpoints.yaml
I0504 21:43:28.502536   27023 framework.go:274] Applying /home/prow/go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-service.yaml
I0504 21:43:28.538296   27023 framework.go:274] Applying /home/prow/go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-serviceMonitor.yaml
I0504 21:43:28.570326   27023 prometheus.go:406] Waiting for Prometheus stack to become healthy...
W0504 21:43:58.601673   27023 util.go:72] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090), response: "k8s\x00\n\f\n\x02v1\x12\x06Status\x12]\n\x06\n\x00\x12\x00\x1a\x00\x12\aFailure\x1a3no endpoints available for service \"prometheus-k8s\"\"\x12ServiceUnavailable0\xf7\x03\x1a\x00\"\x00"
I0504 21:44:28.618446   27023 util.go:101] 5/9 targets are ready, example not ready target: {map[container:windows-exporter endpoint:http instance:10.1.0.5:9182 job:windows-exporter namespace:monitoring pod:windows-exporter-jttvn service:windows-exporter] unknown}
I0504 21:44:58.619559   27023 util.go:101] 6/9 targets are ready, example not ready target: {map[endpoint:kube-controller-manager instance:10.0.0.4:10257 job:master namespace:monitoring service:master] down}
I0504 21:45:28.615311   27023 util.go:101] 6/9 targets are ready, example not ready target: {map[endpoint:kube-controller-manager instance:10.0.0.4:10257 job:master namespace:monitoring service:master] down}
I0504 21:45:58.617218   27023 util.go:101] 6/9 targets are ready, example not ready target: {map[endpoint:kube-controller-manager instance:10.0.0.4:10257 job:master namespace:monitoring service:master] down}
I0504 21:46:28.630999   27023 util.go:101] 6/9 targets are ready, example not ready target: {map[endpoint:kube-controller-manager instance:10.0.0.4:10257 job:master namespace:monitoring service:master] down}
I0504 21:46:58.619273   27023 util.go:101] 6/9 targets are ready, example not ready target: {map[endpoint:kube-controller-manager instance:10.0.0.4:10257 job:master namespace:monitoring service:master] down}
... skipping 126 lines ...
    "eventTime": null,
    "reportingComponent": "",
    "reportingInstance": ""
  }
]
}
F0504 21:58:28.698197   27023 clusterloader.go:300] Error while setting up prometheus stack: timed out waiting for the condition
exit status 1
make: Nothing to be done for 'kubectl'.
================ DUMPING LOGS FOR MANAGEMENT CLUSTER ================
Exported logs for cluster "capz" to: /logs/artifacts/management-cluster
================ DUMPING LOGS FOR WORKLOAD CLUSTER (Linux) ==========
... skipping 21 lines ...
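The failure above is the standard poll-until-ready pattern hitting its deadline: clusterloader2 repeatedly checks Prometheus target health (`6/9 targets are ready ...`) until the 15m `ReadyTimeout` expires and it aborts with `timed out waiting for the condition`. The same pattern appears earlier as `timeout --foreground ... bash -c "while ! ...; done"`. A self-contained sketch of its generic shape, using a marker file as a stand-in for a real readiness check such as a Prometheus targets query:

```shell
# Poll a condition until it holds or an overall deadline expires.
# The marker file stands in for a real check, e.g.
# `kubectl --kubeconfig=./kubeconfig get nodes | grep control-plane`.
MARKER="/tmp/ready-marker.$$"
rm -f "$MARKER"

# Simulate the condition becoming true after ~1s (in the failing job it never did).
(sleep 1 && touch "$MARKER") &

if timeout --foreground 10 bash -c "while ! [ -f '$MARKER' ]; do sleep 0.2; done"; then
  echo "condition met"
else
  echo "timed out waiting for the condition"   # the failure mode this job hit
fi
rm -f "$MARKER"
```

`timeout --foreground` kills the inner polling loop when the deadline passes and exits nonzero, which is what turns a stuck readiness check into a bounded failure instead of a hung job.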
Deploying log-dump-daemonset-windows
daemonset.apps/log-dump-node-windows created
Waiting for log-dump-daemonset-windows
pod/log-dump-node-windows-br6gb condition met
pod/log-dump-node-windows-kq5ng condition met
Getting logs for node capz-de0b-tb9j2
C:\var\log\kubelet\kubelet.exe.capz-de0b-tb9j2.WORKGROUP_capz-de0b-tb9j2$.log.ERROR.20220504-214230.4984
C:\var\log\kubelet\kubelet.exe.capz-de0b-tb9j2.WORKGROUP_capz-de0b-tb9j2$.log.INFO.20220504-214229.4984
C:\var\log\kubelet\kubelet.exe.capz-de0b-tb9j2.WORKGROUP_capz-de0b-tb9j2$.log.WARNING.20220504-214230.4984
C:\var\log\kubelet\kubelet.exe.ERROR
C:\var\log\kubelet\kubelet.exe.INFO
C:\var\log\kubelet\kubelet.exe.WARNING
6 File(s) copied
C:\var\log\pods\default_log-dump-node-windows-br6gb_00d06571-4e2c-462c-8601-22acf96edb17\log-dump-node-windows\0.log
C:\var\log\pods\kube-system_calico-node-windows-5m8gd_f4a9b7d4-a5c2-4627-9478-78515f3a6f66\calico-node-felix\0.log
C:\var\log\pods\kube-system_calico-node-windows-5m8gd_f4a9b7d4-a5c2-4627-9478-78515f3a6f66\calico-node-felix\1.log
C:\var\log\pods\kube-system_calico-node-windows-5m8gd_f4a9b7d4-a5c2-4627-9478-78515f3a6f66\calico-node-startup\0.log
C:\var\log\pods\kube-system_calico-node-windows-5m8gd_f4a9b7d4-a5c2-4627-9478-78515f3a6f66\install-cni\0.log
C:\var\log\pods\monitoring_windows-exporter-jttvn_69246270-eab1-4acb-bc58-a3e0667f98b5\configure-firewall\0.log
C:\var\log\pods\monitoring_windows-exporter-jttvn_69246270-eab1-4acb-bc58-a3e0667f98b5\windows-exporter\0.log
7 File(s) copied
Collecting pod logs
Getting logfile C:\log\kubelet.exe.capz-de0b-tb9j2.WORKGROUP_capz-de0b-tb9j2$.log.ERROR.20220504-214230.4984
Getting logfile C:\log\kubelet.exe.capz-de0b-tb9j2.WORKGROUP_capz-de0b-tb9j2$.log.INFO.20220504-214229.4984
Getting logfile C:\log\kubelet.exe.capz-de0b-tb9j2.WORKGROUP_capz-de0b-tb9j2$.log.WARNING.20220504-214230.4984
Getting logfile C:\log\kubelet.exe.ERROR
Getting logfile C:\log\kubelet.exe.INFO
Getting logfile C:\log\kubelet.exe.WARNING
Getting logfile C:\log\default_log-dump-node-windows-br6gb_00d06571-4e2c-462c-8601-22acf96edb17\log-dump-node-windows\0.log
Getting logfile C:\log\kube-system_calico-node-windows-5m8gd_f4a9b7d4-a5c2-4627-9478-78515f3a6f66\calico-node-felix\0.log
Getting logfile C:\log\kube-system_calico-node-windows-5m8gd_f4a9b7d4-a5c2-4627-9478-78515f3a6f66\calico-node-felix\1.log
Getting logfile C:\log\kube-system_calico-node-windows-5m8gd_f4a9b7d4-a5c2-4627-9478-78515f3a6f66\calico-node-startup\0.log
Getting logfile C:\log\kube-system_calico-node-windows-5m8gd_f4a9b7d4-a5c2-4627-9478-78515f3a6f66\install-cni\0.log
Getting logfile C:\log\monitoring_windows-exporter-jttvn_69246270-eab1-4acb-bc58-a3e0667f98b5\configure-firewall\0.log
Getting logfile C:\log\monitoring_windows-exporter-jttvn_69246270-eab1-4acb-bc58-a3e0667f98b5\windows-exporter\0.log
Exported logs for node "capz-de0b-tb9j2"
Getting logs for node capz-de0b-clsj6
C:\var\log\kubelet\kubelet.exe.capz-de0b-clsj6.WORKGROUP_capz-de0b-clsj6$.log.ERROR.20220504-214236.3812
C:\var\log\kubelet\kubelet.exe.capz-de0b-clsj6.WORKGROUP_capz-de0b-clsj6$.log.INFO.20220504-214236.3812
C:\var\log\kubelet\kubelet.exe.capz-de0b-clsj6.WORKGROUP_capz-de0b-clsj6$.log.WARNING.20220504-214236.3812
C:\var\log\kubelet\kubelet.exe.ERROR
C:\var\log\kubelet\kubelet.exe.INFO
C:\var\log\kubelet\kubelet.exe.WARNING
6 File(s) copied
C:\var\log\pods\default_log-dump-node-windows-kq5ng_7823ec83-3ab3-47f8-98f0-023b85d18583\log-dump-node-windows\0.log
C:\var\log\pods\kube-system_calico-node-windows-r7lpz_902f7144-cabe-4fb3-8890-f3bf91d6a785\calico-node-felix\0.log
C:\var\log\pods\kube-system_calico-node-windows-r7lpz_902f7144-cabe-4fb3-8890-f3bf91d6a785\calico-node-felix\1.log
C:\var\log\pods\kube-system_calico-node-windows-r7lpz_902f7144-cabe-4fb3-8890-f3bf91d6a785\calico-node-startup\0.log
C:\var\log\pods\kube-system_calico-node-windows-r7lpz_902f7144-cabe-4fb3-8890-f3bf91d6a785\install-cni\0.log
C:\var\log\pods\monitoring_windows-exporter-szhms_b28d88bc-6fec-4c45-b078-fa86e5a9f1fe\configure-firewall\0.log
C:\var\log\pods\monitoring_windows-exporter-szhms_b28d88bc-6fec-4c45-b078-fa86e5a9f1fe\windows-exporter\0.log
7 File(s) copied
Collecting pod logs
Getting logfile C:\log\kubelet.exe.capz-de0b-clsj6.WORKGROUP_capz-de0b-clsj6$.log.ERROR.20220504-214236.3812
Getting logfile C:\log\kubelet.exe.capz-de0b-clsj6.WORKGROUP_capz-de0b-clsj6$.log.INFO.20220504-214236.3812
Getting logfile C:\log\kubelet.exe.capz-de0b-clsj6.WORKGROUP_capz-de0b-clsj6$.log.WARNING.20220504-214236.3812
Getting logfile C:\log\kubelet.exe.ERROR
Getting logfile C:\log\kubelet.exe.INFO
Getting logfile C:\log\kubelet.exe.WARNING
Getting logfile C:\log\default_log-dump-node-windows-kq5ng_7823ec83-3ab3-47f8-98f0-023b85d18583\log-dump-node-windows\0.log
Getting logfile C:\log\kube-system_calico-node-windows-r7lpz_902f7144-cabe-4fb3-8890-f3bf91d6a785\calico-node-felix\0.log
Getting logfile C:\log\kube-system_calico-node-windows-r7lpz_902f7144-cabe-4fb3-8890-f3bf91d6a785\calico-node-felix\1.log
Getting logfile C:\log\kube-system_calico-node-windows-r7lpz_902f7144-cabe-4fb3-8890-f3bf91d6a785\calico-node-startup\0.log
... skipping 24 lines ...