PR monianshouhou: [release-1.3] Help windows cloud-node-manager to be better provisioned
Result: success
Tests: 0 failed / 4 succeeded
Started: 2022-09-08 13:18
Elapsed: 1h25m
Revision:
Refs: 2627
Uploader: crier

No Test Failures!


4 passed tests

19 skipped tests

Error lines from build-log.txt

... skipping 519 lines ...
 ✓ Installing CNI 🔌
 • Installing StorageClass 💾  ...
 ✓ Installing StorageClass 💾
INFO: The kubeconfig file for the kind cluster is /tmp/e2e-kind2714556682
INFO: Loading image: "capzci.azurecr.io/cluster-api-azure-controller-amd64:20220908131846"
INFO: Loading image: "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.4"
INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.4" into the kind cluster "capz-e2e": error saving image "registry.k8s.io/cluster-api/cluster-api-controller:v1.1.4" to "/tmp/image-tar356073483/image.tar": unable to read image data: Error response from daemon: reference does not exist
INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.4"
INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.4" into the kind cluster "capz-e2e": error saving image "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.1.4" to "/tmp/image-tar1995326561/image.tar": unable to read image data: Error response from daemon: reference does not exist
INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.4"
INFO: [WARNING] Unable to load image "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.4" into the kind cluster "capz-e2e": error saving image "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.1.4" to "/tmp/image-tar4108970922/image.tar": unable to read image data: Error response from daemon: reference does not exist
STEP: Initializing the bootstrap cluster
INFO: clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure azure
INFO: Waiting for provider controllers to be running
STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available
INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-8447dbccc5-nlklk, container manager
STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available
... skipping 10 lines ...
  With ipv6 worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:271

INFO: "With ipv6 worker node" started at Thu, 08 Sep 2022 13:29:02 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-ftlocf" for hosting the cluster
Sep  8 13:29:02.318: INFO: starting to create namespace for hosting the "capz-e2e-ftlocf" test spec
2022/09/08 13:29:02 failed trying to get namespace (capz-e2e-ftlocf):namespaces "capz-e2e-ftlocf" not found
INFO: Creating namespace capz-e2e-ftlocf
INFO: Creating event watcher for namespace "capz-e2e-ftlocf"
Sep  8 13:29:02.352: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-ftlocf-ipv6
INFO: Creating the workload cluster with name "capz-e2e-ftlocf-ipv6" using the "ipv6" template (Kubernetes v1.22.13, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 129 lines ...
STEP: Fetching activity logs took 1.706072659s
STEP: Dumping all the Cluster API resources in the "capz-e2e-ftlocf" namespace
STEP: Deleting all clusters in the capz-e2e-ftlocf namespace
STEP: Deleting cluster capz-e2e-ftlocf-ipv6
INFO: Waiting for the Cluster capz-e2e-ftlocf/capz-e2e-ftlocf-ipv6 to be deleted
STEP: Waiting for cluster capz-e2e-ftlocf-ipv6 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-zzjdc, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-cl5fx, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-nd8lv, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-42868, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ftlocf-ipv6-control-plane-wlx7p, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ftlocf-ipv6-control-plane-wlx7p, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ftlocf-ipv6-control-plane-lsf5v, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ftlocf-ipv6-control-plane-lsf5v, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ftlocf-ipv6-control-plane-wlx7p, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ftlocf-ipv6-control-plane-wlx7p, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-j9jx2, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6gxbl, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ftlocf-ipv6-control-plane-lsf5v, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-969cf87c4-9qgqb, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ftlocf-ipv6-control-plane-lsf5v, container kube-controller-manager: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-ftlocf
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 19m43s on Ginkgo node 1 of 3

... skipping 10 lines ...
  Creates a public management cluster in a custom vnet
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:145

INFO: "Creates a public management cluster in a custom vnet" started at Thu, 08 Sep 2022 13:29:00 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-1bkxxd" for hosting the cluster
Sep  8 13:29:00.412: INFO: starting to create namespace for hosting the "capz-e2e-1bkxxd" test spec
2022/09/08 13:29:00 failed trying to get namespace (capz-e2e-1bkxxd):namespaces "capz-e2e-1bkxxd" not found
INFO: Creating namespace capz-e2e-1bkxxd
INFO: Creating event watcher for namespace "capz-e2e-1bkxxd"
Sep  8 13:29:00.450: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-1bkxxd-public-custom-vnet
STEP: creating Azure clients with the workload cluster's subscription
STEP: creating a resource group
... skipping 107 lines ...
STEP: Collecting events for Pod kube-system/calico-node-kctgp
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-e2e-1bkxxd-public-custom-vnet-control-plane-29zv8
STEP: Creating log watcher for controller kube-system/kube-proxy-lhcn9, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-2lxhh
STEP: Collecting events for Pod kube-system/kube-proxy-lhcn9
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-1bkxxd-public-custom-vnet-control-plane-29zv8, container kube-apiserver
STEP: failed to find events of Pod "etcd-capz-e2e-1bkxxd-public-custom-vnet-control-plane-29zv8"
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-e2e-1bkxxd-public-custom-vnet-control-plane-29zv8
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-e2e-1bkxxd-public-custom-vnet-control-plane-29zv8, container kube-scheduler
STEP: failed to find events of Pod "kube-controller-manager-capz-e2e-1bkxxd-public-custom-vnet-control-plane-29zv8"
STEP: failed to find events of Pod "kube-scheduler-capz-e2e-1bkxxd-public-custom-vnet-control-plane-29zv8"
STEP: Fetching activity logs took 3.404495159s
STEP: Dumping all the Cluster API resources in the "capz-e2e-1bkxxd" namespace
STEP: Deleting all clusters in the capz-e2e-1bkxxd namespace
STEP: Deleting cluster capz-e2e-1bkxxd-public-custom-vnet
INFO: Waiting for the Cluster capz-e2e-1bkxxd/capz-e2e-1bkxxd-public-custom-vnet to be deleted
STEP: Waiting for cluster capz-e2e-1bkxxd-public-custom-vnet to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-1bkxxd-public-custom-vnet-control-plane-29zv8, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-72rjc, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-tdvvs, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-1bkxxd-public-custom-vnet-control-plane-29zv8, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-1bkxxd-public-custom-vnet-control-plane-29zv8, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-1bkxxd-public-custom-vnet-control-plane-29zv8, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-969cf87c4-vq5wl, container calico-kube-controllers: http2: client connection lost
W0908 14:10:56.417305   30433 reflector.go:442] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: watch of *v1.Event ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
STEP: Got error while streaming logs for pod kube-system/calico-node-kctgp, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-lhcn9, container kube-proxy: http2: client connection lost
W0908 14:11:27.220491   30433 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1bkxxd/events?resourceVersion=7792": dial tcp 20.23.176.50:6443: i/o timeout
I0908 14:11:27.220643   30433 trace.go:205] Trace[496423054]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167 (08-Sep-2022 14:10:57.219) (total time: 30001ms):
Trace[496423054]: ---"Objects listed" error:Get "https://capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1bkxxd/events?resourceVersion=7792": dial tcp 20.23.176.50:6443: i/o timeout 30001ms (14:11:27.220)
Trace[496423054]: [30.001399985s] [30.001399985s] END
E0908 14:11:27.220721   30433 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1bkxxd/events?resourceVersion=7792": dial tcp 20.23.176.50:6443: i/o timeout
W0908 14:11:59.062353   30433 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1bkxxd/events?resourceVersion=7792": dial tcp 20.23.176.50:6443: i/o timeout
I0908 14:11:59.062489   30433 trace.go:205] Trace[768089203]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167 (08-Sep-2022 14:11:29.060) (total time: 30001ms):
Trace[768089203]: ---"Objects listed" error:Get "https://capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1bkxxd/events?resourceVersion=7792": dial tcp 20.23.176.50:6443: i/o timeout 30001ms (14:11:59.062)
Trace[768089203]: [30.001822197s] [30.001822197s] END
E0908 14:11:59.062523   30433 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1bkxxd/events?resourceVersion=7792": dial tcp 20.23.176.50:6443: i/o timeout
W0908 14:12:35.340634   30433 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1bkxxd/events?resourceVersion=7792": dial tcp 20.23.176.50:6443: i/o timeout
I0908 14:12:35.340808   30433 trace.go:205] Trace[2064569205]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167 (08-Sep-2022 14:12:05.339) (total time: 30001ms):
Trace[2064569205]: ---"Objects listed" error:Get "https://capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1bkxxd/events?resourceVersion=7792": dial tcp 20.23.176.50:6443: i/o timeout 30001ms (14:12:35.340)
Trace[2064569205]: [30.001185046s] [30.001185046s] END
E0908 14:12:35.340854   30433 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1bkxxd/events?resourceVersion=7792": dial tcp 20.23.176.50:6443: i/o timeout
W0908 14:12:45.439009   30433 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1bkxxd/events?resourceVersion=7792": dial tcp: lookup capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0908 14:12:45.439089   30433 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1bkxxd/events?resourceVersion=7792": dial tcp: lookup capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-1bkxxd
STEP: Running additional cleanup for the "create-workload-cluster" test spec
Sep  8 14:12:58.639: INFO: deleting an existing virtual network "custom-vnet"
W0908 14:13:08.470634   30433 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1bkxxd/events?resourceVersion=7792": dial tcp: lookup capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0908 14:13:08.470749   30433 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1bkxxd/events?resourceVersion=7792": dial tcp: lookup capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
Sep  8 14:13:09.626: INFO: deleting an existing route table "node-routetable"
Sep  8 14:13:12.623: INFO: deleting an existing network security group "node-nsg"
Sep  8 14:13:23.457: INFO: deleting an existing network security group "control-plane-nsg"
Sep  8 14:13:34.456: INFO: verifying the existing resource group "capz-e2e-1bkxxd-public-custom-vnet" is empty
Sep  8 14:13:34.604: INFO: deleting the existing resource group "capz-e2e-1bkxxd-public-custom-vnet"
W0908 14:13:51.694160   30433 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1bkxxd/events?resourceVersion=7792": dial tcp: lookup capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0908 14:13:51.694267   30433 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1bkxxd/events?resourceVersion=7792": dial tcp: lookup capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
W0908 14:14:21.988707   30433 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1bkxxd/events?resourceVersion=7792": dial tcp: lookup capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0908 14:14:21.988854   30433 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1bkxxd/events?resourceVersion=7792": dial tcp: lookup capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
W0908 14:14:59.475078   30433 reflector.go:324] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: failed to list *v1.Event: Get "https://capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1bkxxd/events?resourceVersion=7792": dial tcp: lookup capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
E0908 14:14:59.475191   30433 reflector.go:138] pkg/mod/k8s.io/client-go@v0.23.5/tools/cache/reflector.go:167: Failed to watch *v1.Event: failed to list *v1.Event: Get "https://capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com:6443/api/v1/namespaces/capz-e2e-1bkxxd/events?resourceVersion=7792": dial tcp: lookup capz-e2e-1bkxxd-public-custom-vnet-66979ae7.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host
INFO: "Creates a public management cluster in a custom vnet" ran for 46m43s on Ginkgo node 3 of 3


• [SLOW TEST:2802.969 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:44
... skipping 8 lines ...
  With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:197

INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" started at Thu, 08 Sep 2022 13:29:00 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-hg8ee7" for hosting the cluster
Sep  8 13:29:00.998: INFO: starting to create namespace for hosting the "capz-e2e-hg8ee7" test spec
2022/09/08 13:29:01 failed trying to get namespace (capz-e2e-hg8ee7):namespaces "capz-e2e-hg8ee7" not found
INFO: Creating namespace capz-e2e-hg8ee7
INFO: Creating event watcher for namespace "capz-e2e-hg8ee7"
Sep  8 13:29:01.031: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-hg8ee7-ha
INFO: Creating the workload cluster with name "capz-e2e-hg8ee7-ha" using the "(default)" template (Kubernetes v1.22.13, 3 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 72 lines ...
Sep  8 13:42:21.553: INFO: waiting for the external LB service to be deleted: webbcey0p-elb
Sep  8 13:43:08.441: INFO: starting to delete deployment webbcey0p
Sep  8 13:43:08.556: INFO: starting to delete job curl-to-elb-jobedfkh73d7wj
STEP: creating a Kubernetes client to the workload cluster
STEP: Creating development namespace
Sep  8 13:43:08.718: INFO: starting to create dev deployment namespace
2022/09/08 13:43:08 failed trying to get namespace (development):namespaces "development" not found
2022/09/08 13:43:08 namespace development does not exist, creating...
STEP: Creating production namespace
Sep  8 13:43:08.945: INFO: starting to create prod deployment namespace
2022/09/08 13:43:09 failed trying to get namespace (production):namespaces "production" not found
2022/09/08 13:43:09 namespace production does not exist, creating...
STEP: Creating frontendProd, backend and network-policy pod deployments
Sep  8 13:43:09.174: INFO: starting to create frontend-prod deployments
Sep  8 13:43:09.292: INFO: starting to create frontend-dev deployments
Sep  8 13:43:09.407: INFO: starting to create backend deployments
Sep  8 13:43:09.521: INFO: starting to create network-policy deployments
... skipping 11 lines ...
STEP: Ensuring we have outbound internet access from the network-policy pods
STEP: Ensuring we have connectivity from network-policy pods to frontend-prod pods
STEP: Ensuring we have connectivity from network-policy pods to backend pods
STEP: Applying a network policy to deny ingress access to app: webapp, role: backend pods in development namespace
Sep  8 13:43:38.657: INFO: starting to applying a network policy development/backend-deny-ingress to deny access to app: webapp, role: backend pods in development namespace
STEP: Ensuring we no longer have ingress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.6.69 port 80: Connection timed out

STEP: Cleaning up after ourselves
Sep  8 13:45:50.286: INFO: starting to cleaning up network policy development/backend-deny-ingress after ourselves
STEP: Applying a network policy to deny egress access in development namespace
Sep  8 13:45:50.685: INFO: starting to applying a network policy development/backend-deny-egress to deny egress access in development namespace
STEP: Ensuring we no longer have egress access from the network-policy pods to backend pods
curl: (7) Failed to connect to 192.168.6.69 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.6.69 port 80: Connection timed out

STEP: Cleaning up after ourselves
Sep  8 13:50:12.615: INFO: starting to cleaning up network policy development/backend-deny-egress after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
Sep  8 13:50:13.032: INFO: starting to applying a network policy development/backend-allow-egress-pod-label to allow egress access to app: webapp, role: frontend pods in any namespace from pods with app: webapp, role: backend labels in development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.188.130 port 80: Connection timed out

STEP: Cleaning up after ourselves
Sep  8 13:52:25.735: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-label after ourselves
STEP: Applying a network policy to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
Sep  8 13:52:26.153: INFO: starting to applying a network policy development/backend-allow-egress-pod-namespace-label to allow egress access to app: webapp, role: frontend pods from pods with app: webapp, role: backend labels in same development namespace
STEP: Ensuring we have egress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.188.129 port 80: Connection timed out

curl: (7) Failed to connect to 192.168.188.130 port 80: Connection timed out

STEP: Cleaning up after ourselves
Sep  8 13:56:49.928: INFO: starting to cleaning up network policy development/backend-allow-egress-pod-namespace-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
Sep  8 13:56:50.359: INFO: starting to applying a network policy development/backend-allow-ingress-pod-label to only allow ingress access to app: webapp, role: backend pods in development namespace from pods in any namespace with the same labels
STEP: Ensuring we have ingress access from pods with matching labels
STEP: Ensuring we don't have ingress access from pods without matching labels
curl: (7) Failed to connect to 192.168.6.69 port 80: Connection timed out

STEP: Cleaning up after ourselves
Sep  8 13:59:02.858: INFO: starting to cleaning up network policy development/backend-allow-ingress-pod-label after ourselves
STEP: Applying a network policy to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
Sep  8 13:59:03.311: INFO: starting to applying a network policy development/backend-policy-allow-ingress-pod-namespace-label to only allow ingress access to app: webapp role:backends in development namespace from pods with label app:webapp, role: frontendProd within namespace with label purpose: development
STEP: Ensuring we don't have ingress access from role:frontend pods in production namespace
curl: (7) Failed to connect to 192.168.6.69 port 80: Connection timed out

STEP: Ensuring we have ingress access from role:frontend pods in development namespace
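The deny/allow sequence above exercises standard Kubernetes NetworkPolicy objects against the labeled pods. As a hedged sketch only, a policy like `development/backend-deny-ingress` plausibly looks like the following; the selectors are inferred from the labels named in the log ("app: webapp, role: backend"), not taken from the actual test manifest:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-deny-ingress
  namespace: development
spec:
  # Selects the app: webapp, role: backend pods the log refers to (assumed labels).
  podSelector:
    matchLabels:
      app: webapp
      role: backend
  policyTypes:
    - Ingress   # Ingress listed with no rules => all ingress to these pods is denied
```

With this applied, the harness's curl from the network-policy pods times out ("Failed to connect ... port 80"), which is the expected denial, and the policy is deleted again in the "Cleaning up after ourselves" step.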
STEP: creating a Kubernetes client to the workload cluster
STEP: creating an HTTP deployment
STEP: waiting for deployment default/web-windows3h43vr to be available
Sep  8 14:01:15.406: INFO: starting to wait for deployment to become available
Sep  8 14:02:16.225: INFO: Deployment default/web-windows3h43vr is now available, took 1m0.818910157s
... skipping 21 lines ...
STEP: waiting for job default/curl-to-elb-joblp3qtkpv1jt to be complete
Sep  8 14:05:12.786: INFO: waiting for job default/curl-to-elb-joblp3qtkpv1jt to be complete
Sep  8 14:05:23.009: INFO: job default/curl-to-elb-joblp3qtkpv1jt is complete, took 10.223309655s
STEP: connecting directly to the external LB service
Sep  8 14:05:23.009: INFO: starting attempts to connect directly to the external LB service
2022/09/08 14:05:23 [DEBUG] GET http://20.238.150.114
2022/09/08 14:05:53 [ERR] GET http://20.238.150.114 request failed: Get "http://20.238.150.114": dial tcp 20.238.150.114:80: i/o timeout
2022/09/08 14:05:53 [DEBUG] GET http://20.238.150.114: retrying in 1s (4 left)
Sep  8 14:05:54.229: INFO: successfully connected to the external LB service
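The DEBUG/ERR/"retrying in 1s (4 left)" lines above show the harness probing the external LB with a bounded retry loop and fixed backoff until the connection succeeds. A minimal sketch of that pattern, with `fetch` standing in for the real HTTP GET (all names here are illustrative, not from the test harness):

```python
import time


def get_with_retry(fetch, attempts=5, backoff=1.0):
    """Call fetch() until it succeeds or attempts are exhausted.

    Each failure waits `backoff` seconds before the next try, mirroring
    the "retrying in 1s (4 left)" behavior in the log.
    """
    last_err = None
    for attempt in range(attempts):
        try:
            return fetch()
        except OSError as err:  # e.g. "dial tcp ... i/o timeout"
            last_err = err
            if attempt < attempts - 1:
                time.sleep(backoff)
    raise last_err


# Usage: a fake endpoint that times out twice, then answers with HTTP 200.
calls = {"n": 0}

def fake_get():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("dial tcp: i/o timeout")
    return 200

status = get_with_retry(fake_get, attempts=5, backoff=0)
```

The LB takes a while to start answering after the service is created, so the first probe timing out (as it does above at 14:05:53) is normal rather than a failure.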
STEP: deleting the test resources
Sep  8 14:05:54.229: INFO: starting to delete external LB service web-windows3h43vr-elb
Sep  8 14:05:54.387: INFO: waiting for the external LB service to be deleted: web-windows3h43vr-elb
Sep  8 14:06:41.183: INFO: starting to delete deployment web-windows3h43vr
... skipping 21 lines ...
Sep  8 14:08:06.710: INFO: Collecting boot logs for AzureMachine capz-e2e-hg8ee7-ha-md-0-tz5bh

Sep  8 14:08:07.252: INFO: Collecting logs for Windows node capz-e2e-4tcvd in cluster capz-e2e-hg8ee7-ha in namespace capz-e2e-hg8ee7

Sep  8 14:09:37.554: INFO: Collecting boot logs for AzureMachine capz-e2e-hg8ee7-ha-md-win-4tcvd

Failed to get logs for machine capz-e2e-hg8ee7-ha-md-win-6cb8c84797-47t8p, cluster capz-e2e-hg8ee7/capz-e2e-hg8ee7-ha: running command "Get-Content "C:\\cni.log"": Process exited with status 1
Sep  8 14:09:38.737: INFO: Collecting logs for Windows node capz-e2e-87tlr in cluster capz-e2e-hg8ee7-ha in namespace capz-e2e-hg8ee7

Sep  8 14:10:08.221: INFO: Collecting boot logs for AzureMachine capz-e2e-hg8ee7-ha-md-win-87tlr

STEP: Dumping workload cluster capz-e2e-hg8ee7/capz-e2e-hg8ee7-ha kube-system pod logs
STEP: Collecting events for Pod kube-system/calico-node-5wnrn
... skipping 69 lines ...
STEP: Fetching activity logs took 5.076178843s
STEP: Dumping all the Cluster API resources in the "capz-e2e-hg8ee7" namespace
STEP: Deleting all clusters in the capz-e2e-hg8ee7 namespace
STEP: Deleting cluster capz-e2e-hg8ee7-ha
INFO: Waiting for the Cluster capz-e2e-hg8ee7/capz-e2e-hg8ee7-ha to be deleted
STEP: Waiting for cluster capz-e2e-hg8ee7-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/csi-proxy-h6fsk, container csi-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6whvx, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-6x2pb, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-hgjvg, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-hg8ee7-ha-control-plane-9d6x2, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-7kdzw, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-hg8ee7-ha-control-plane-k9lpw, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-mvh5r, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-k768w, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-pq6r6, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-k768w, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-hg8ee7-ha-control-plane-zp9gt, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-prftp, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-hg8ee7-ha-control-plane-k9lpw, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-6x2pb, container calico-node-felix: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-5wnrn, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-hg8ee7-ha-control-plane-zp9gt, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-hg8ee7-ha-control-plane-k9lpw, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/containerd-logger-vvbc8, container containerd-logger: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-6rs5d, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-hg8ee7-ha-control-plane-k9lpw, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-hg8ee7-ha-control-plane-zp9gt, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-969cf87c4-4mrvj, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-6xz6j, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-g9zg2, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/csi-proxy-24qpp, container csi-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-5s8ps, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-w6hcj, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-lfbjc, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-hg8ee7-ha-control-plane-9d6x2, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/containerd-logger-sl545, container containerd-logger: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-hg8ee7-ha-control-plane-9d6x2, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-hg8ee7-ha-control-plane-zp9gt, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-hg8ee7-ha-control-plane-9d6x2, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-q2t7j, container calico-node: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-hg8ee7
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes" ran for 50m14s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:310

INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" started at Thu, 08 Sep 2022 13:48:45 UTC on Ginkgo node 1 of 3
STEP: Creating namespace "capz-e2e-ct0cm6" for hosting the cluster
Sep  8 13:48:45.297: INFO: starting to create namespace for hosting the "capz-e2e-ct0cm6" test spec
2022/09/08 13:48:45 failed trying to get namespace (capz-e2e-ct0cm6):namespaces "capz-e2e-ct0cm6" not found
INFO: Creating namespace capz-e2e-ct0cm6
INFO: Creating event watcher for namespace "capz-e2e-ct0cm6"
Sep  8 13:48:45.333: INFO: Creating cluster identity secret "cluster-identity-secret"
INFO: Cluster name is capz-e2e-ct0cm6-vmss
INFO: Creating the workload cluster with name "capz-e2e-ct0cm6-vmss" using the "machine-pool" template (Kubernetes v1.22.13, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
... skipping 38 lines ...
Sep  8 14:02:58.548: INFO: job default/kubescape-scan is complete, took 20.333701211s
Output of "kubescape scan framework nsa --enable-host-scan --exclude-namespaces kube-system,kube-public":

Logs for pod kubescape-scan-vhq4z:
ARMO security scanner starting
[progress] Installing host sensor
[Error] failed to init host sensor
Warning: 'kubescape' is not updated to the latest release: 'v2.0.170'
[progress] Downloading/Loading policy definitions
[success] Downloaded/Loaded policy
[progress] Accessing Kubernetes objects
W0908 14:02:45.408954       1 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
W0908 14:02:45.441354       1 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
[success] Successfully accessed Kubernetes objects
[progress] Scanning cluster 
[success] Done scanning cluster 
[control: Allow privilege escalation - https://hub.armo.cloud/docs/c-0016] failed 😥
Description: Attackers may gain access to a container and escalate its privileges to enable excessive capabilities.
Failed:
   Namespace default
      Job - kubescape-scan 
Summary - Passed:0   Excluded:0   Failed:1   Total:1
Remediation: If your application does not need it, make sure the allowPrivilegeEscalation field of the securityContext is set to false.
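A minimal pod-spec sketch of this remediation (the pod name, container name, and image are illustrative, not from this run):

```yaml
# Illustrative fragment: disallow privilege escalation for the container.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # hypothetical name
spec:
  containers:
    - name: app
      image: nginx:1.23    # hypothetical image
      securityContext:
        allowPrivilegeEscalation: false
```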

[control: Allowed hostPath - https://hub.armo.cloud/docs/c-0006] passed 👍
Description: Mounting host directory to the container can be abused to get access to sensitive data and gain persistence on the host machine.
Summary - Passed:1   Excluded:0   Failed:0   Total:1

[control: Applications credentials in configuration files - https://hub.armo.cloud/docs/c-0012] passed 👍
Description: Attackers who have access to configuration files can steal the stored secrets and use them. This control checks if ConfigMaps or pod specifications have sensitive information in their configuration.
Summary - Passed:3   Excluded:0   Failed:0   Total:3

[control: Audit logs enabled - https://hub.armo.cloud/docs/c-0067] skipped 😕
Description: Audit logging is an important security feature in Kubernetes; it enables the operator to track requests to the cluster. It is important to use it so the operator has a record of events that happened in Kubernetes.
[control: Automatic mapping of service account - https://hub.armo.cloud/docs/c-0034] failed 😥
Description: A potential attacker may gain access to a POD and steal its service account token. Therefore, it is recommended to disable automatic mapping of service account tokens in the service account configuration and enable it only for PODs that need to use them.
Failed:
   Namespace default
      ServiceAccount - default 
      ServiceAccount - kubescape-discovery 
      Job - kubescape-scan 
   Namespace kube-node-lease
      ServiceAccount - default 
Summary - Passed:0   Excluded:0   Failed:4   Total:4
Remediation: Disable automatic mounting of service account tokens to PODs either at the service account level or at the individual POD level by specifying automountServiceAccountToken: false. Note that the POD-level setting takes precedence.
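A sketch of both options, assuming hypothetical resource names:

```yaml
# Illustrative: opt out of automatic token mounting at the ServiceAccount level...
apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-sa               # hypothetical name
automountServiceAccountToken: false
---
# ...or per pod; the pod-level setting overrides the service account's.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod              # hypothetical name
spec:
  serviceAccountName: example-sa
  automountServiceAccountToken: false
  containers:
    - name: app
      image: nginx:1.23          # hypothetical image
```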

[control: CVE-2021-25741 - Using symlink for arbitrary host file system access. - https://hub.armo.cloud/docs/c-0058] skipped 😕
Description: A user may be able to create a container with subPath or subPathExpr volume mounts to access files & directories anywhere on the host filesystem. Following Kubernetes versions are affected: v1.22.0 - v1.22.1, v1.21.0 - v1.21.4, v1.20.0 - v1.20.10, version v1.19.14 and lower. This control checks the vulnerable versions and the actual usage of the subPath feature in all Pods in the cluster. If you want to learn more about the CVE, please refer to the CVE link: https://nvd.nist.gov/vuln/detail/CVE-2021-25741
[control: CVE-2021-25742-nginx-ingress-snippet-annotation-vulnerability - https://hub.armo.cloud/docs/c-0059] skipped 😕
Description: Security issue in ingress-nginx where a user that can create or update ingress objects can use the custom snippets feature to obtain all secrets in the cluster (see more at https://github.com/kubernetes/ingress-nginx/issues/7837)
[control: Cluster internal networking - https://hub.armo.cloud/docs/c-0054] failed 😥
Description: If no network policy is defined, attackers who gain access to a container may use it to move laterally in the cluster. This control lists namespaces in which no network policy is defined.
Failed:
   Namespace - default 
   Namespace - kube-node-lease 
Summary - Passed:0   Excluded:0   Failed:2   Total:2
Remediation: Define Kubernetes network policies or use alternative products to protect cluster network.
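A minimal default-deny policy sketch for one of the flagged namespaces (policy name is illustrative):

```yaml
# Illustrative default-deny policy: selects all pods in the namespace and
# declares both policy types with no allow rules, so all traffic is blocked
# until explicit allow rules are added.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all   # hypothetical name
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```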

[control: Cluster-admin binding - https://hub.armo.cloud/docs/c-0035] failed 😥
Description: Attackers who have cluster admin permissions (can perform any action on any resource), can take advantage of their privileges for malicious activities. This control determines which subjects have cluster admin permissions.
Failed:
   Groups
      Group - system:masters 
Summary - Passed:52   Excluded:0   Failed:1   Total:53
Remediation: You should apply least privilege principle. Make sure cluster admin permissions are granted only when it is absolutely necessary. Don't use subjects with such high permissions for daily operations.

[control: Container hostPort - https://hub.armo.cloud/docs/c-0044] passed 👍
Description: Configuring hostPort requires a particular port number. If two objects specify the same hostPort, they cannot be deployed to the same node. This may prevent the second object from starting, even though Kubernetes will try to reschedule it on another node, provided there are available nodes with a sufficient amount of resources. Also, if the number of replicas of such a workload is higher than the number of nodes, the deployment will consistently fail.
Summary - Passed:1   Excluded:0   Failed:0   Total:1

[control: Control plane hardening - https://hub.armo.cloud/docs/c-0005] skipped 😕
Description: Kubernetes control plane API is running with non-secure port enabled which allows attackers to gain unprotected access to the cluster.
[control: Disable anonymous access to Kubelet service - https://hub.armo.cloud/docs/c-0069] skipped 😕
Description: By default, requests to the kubelet's HTTPS endpoint that are not rejected by other configured authentication methods are treated as anonymous requests, and given a username of system:anonymous and a group of system:unauthenticated.
[control: Enforce Kubelet client TLS authentication - https://hub.armo.cloud/docs/c-0070] skipped 😕
Description: Kubelets are the node-level orchestrators in the Kubernetes control plane. They publish service port 10250, where they accept commands from the API server. The operator must make sure that only the API server is allowed to submit commands to the Kubelet. This is done through client certificate verification: the Kubelet must be configured with a client CA file to use for this purpose.
[control: Exec into container - https://hub.armo.cloud/docs/c-0002] failed 😥
Description: Attackers with relevant permissions can run malicious commands in the context of legitimate containers in the cluster using “kubectl exec” command. This control determines which subjects have permissions to use this command.
Failed:
   Groups
      Group - system:masters 
Summary - Passed:52   Excluded:0   Failed:1   Total:53
Remediation: It is recommended to prohibit “kubectl exec” command in production environments. It is also recommended not to use subjects with this permission for daily cluster operations.
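One way to follow this recommendation is to grant pod read access without the exec subresource; a sketch with a hypothetical role name:

```yaml
# Illustrative Role that grants read access to pods but deliberately omits
# the "pods/exec" subresource, so bound subjects cannot use kubectl exec.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader       # hypothetical name
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
```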

[control: Exposed dashboard - https://hub.armo.cloud/docs/c-0047] skipped 😕
Description: Kubernetes dashboard versions before v2.0.1 do not support user authentication. If exposed externally, it will allow unauthenticated remote management of the cluster. This control checks presence of the kubernetes-dashboard deployment and its version number.
[control: Host PID/IPC privileges - https://hub.armo.cloud/docs/c-0038] passed 👍
Description: Containers should be isolated from the host machine as much as possible. The hostPID and hostIPC fields in deployment yaml may allow cross-container influence and may expose the host itself to potentially malicious or destructive actions. This control identifies all PODs using hostPID or hostIPC privileges.
Summary - Passed:1   Excluded:0   Failed:0   Total:1

[control: HostNetwork access - https://hub.armo.cloud/docs/c-0041] passed 👍
Description: Potential attackers may gain access to a POD and inherit access to the entire host network. For example, in the AWS case, they will have access to the entire VPC. This control identifies all the PODs with host network access enabled.
Summary - Passed:1   Excluded:0   Failed:0   Total:1

[control: Immutable container filesystem - https://hub.armo.cloud/docs/c-0017] failed 😥
Description: Mutable container filesystem can be abused to inject malicious code or data into containers. Use immutable (read-only) filesystem to limit potential attacks.
Failed:
   Namespace default
      Job - kubescape-scan 
Summary - Passed:0   Excluded:0   Failed:1   Total:1
Remediation: Set the filesystem of the container to read-only when possible (POD securityContext, readOnlyRootFilesystem: true). If the container's application needs to write into the filesystem, it is recommended to mount secondary filesystems for the specific directories where the application requires write access.
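A sketch of this pattern (names and mount path are illustrative): the root filesystem is read-only and a writable emptyDir is mounted only where the application needs it.

```yaml
# Illustrative: read-only root filesystem plus a writable scratch volume.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # hypothetical name
spec:
  containers:
    - name: app
      image: nginx:1.23    # hypothetical image
      securityContext:
        readOnlyRootFilesystem: true
      volumeMounts:
        - name: tmp
          mountPath: /tmp  # the one directory the app may write to
  volumes:
    - name: tmp
      emptyDir: {}
```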

[control: Ingress and Egress blocked - https://hub.armo.cloud/docs/c-0030] failed 😥
Description: Disable Ingress and Egress traffic on all pods wherever possible. It is recommended to define a restrictive network policy on all new PODs and then enable the sources/destinations that each POD must communicate with.
Failed:
   Namespace default
      Job - kubescape-scan 
Summary - Passed:0   Excluded:0   Failed:1   Total:1
Remediation: Define a network policy that restricts ingress and egress connections.
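A sketch of a restrictive per-workload policy (name and label are hypothetical): it denies all ingress and allows only DNS egress, so each real destination must be added as an explicit egress rule.

```yaml
# Illustrative restrictive policy for a specific workload: no ingress allowed,
# egress allowed only to DNS (port 53); extend the egress list per destination.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-example   # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: example         # hypothetical label
  policyTypes:
    - Ingress
    - Egress
  egress:
    - ports:
        - protocol: UDP
          port: 53
```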

[control: Insecure capabilities - https://hub.armo.cloud/docs/c-0046] passed 👍
Description: Giving insecure or excessive capabilities to a container can increase the impact of a container compromise. This control identifies all the PODs with dangerous capabilities (see the documentation pages for details).
Summary - Passed:1   Excluded:0   Failed:0   Total:1

[control: Linux hardening - https://hub.armo.cloud/docs/c-0055] failed 😥
Description: Containers may be given more privileges than they actually need. This can increase the potential impact of a container compromise.
Failed:
   Namespace default
      Job - kubescape-scan 
Summary - Passed:0   Excluded:0   Failed:1   Total:1
Remediation: You can use AppArmor, Seccomp, SELinux and Linux Capabilities mechanisms to restrict containers abilities to utilize unwanted privileges.
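A sketch of two of these mechanisms in a pod spec (names and image are illustrative): the runtime's default seccomp profile plus dropping all Linux capabilities.

```yaml
# Illustrative hardening fragment: default seccomp profile at the pod level
# and all Linux capabilities dropped at the container level.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # hypothetical name
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: nginx:1.23    # hypothetical image
      securityContext:
        capabilities:
          drop: ["ALL"]
```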

[control: Non-root containers - https://hub.armo.cloud/docs/c-0013] failed 😥
Description: Potential attackers may gain access to a container and leverage its existing privileges to conduct an attack. Therefore, it is not recommended to deploy containers with root privileges unless it is absolutely necessary. This control identifies all the Pods running as root or able to escalate to root.
Failed:
   Namespace default
      Job - kubescape-scan 
Summary - Passed:0   Excluded:0   Failed:1   Total:1
Remediation: If your application does not need root privileges, make sure to define runAsUser or runAsGroup under the PodSecurityContext and use user ID 1000 or higher. Do not turn on the allowPrivilegeEscalation bit, and make sure runAsNonRoot is true.
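A minimal non-root configuration sketch (pod name and image are hypothetical):

```yaml
# Illustrative: run as a non-root user with escalation disabled.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # hypothetical name
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
    runAsNonRoot: true
  containers:
    - name: app
      image: nginx:1.23    # hypothetical image
      securityContext:
        allowPrivilegeEscalation: false
```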

[control: PSP enabled - https://hub.armo.cloud/docs/c-0068] skipped 😕
Description: PSPs enable fine-grained authorization of pod creation, and it is important to enable them
[control: Privileged container - https://hub.armo.cloud/docs/c-0057] passed 👍
Description: Potential attackers may gain access to privileged containers and inherit access to the host resources. Therefore, it is not recommended to deploy privileged containers unless it is absolutely necessary. This control identifies all the privileged Pods.
Summary - Passed:1   Excluded:0   Failed:0   Total:1

[control: Resource policies - https://hub.armo.cloud/docs/c-0009] failed 😥
Description: CPU and memory resources should have a limit set for every container or a namespace to prevent resource exhaustion. This control identifies all the Pods without resource limit definitions by checking their yaml definition file as well as their namespace LimitRange objects. It is also recommended to use ResourceQuota object to restrict overall namespace resources, but this is not verified by this control.
Failed:
   Namespace default
      Job - kubescape-scan 
Summary - Passed:0   Excluded:0   Failed:1   Total:1
Remediation: Define LimitRange and Resource Limits in the namespace or in the deployment/POD yamls.
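A sketch of both halves of this remediation, with hypothetical names and values: a namespace-level LimitRange providing defaults, and explicit per-container requests/limits in the pod spec.

```yaml
# Illustrative: namespace defaults via LimitRange...
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits     # hypothetical name
  namespace: default
spec:
  limits:
    - type: Container
      default:             # applied when a container sets no limits
        cpu: 500m
        memory: 256Mi
      defaultRequest:      # applied when a container sets no requests
        cpu: 100m
        memory: 128Mi
---
# ...and explicit limits in the pod spec itself.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # hypothetical name
spec:
  containers:
    - name: app
      image: nginx:1.23    # hypothetical image
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          cpu: 500m
          memory: 256Mi
```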

[control: Secret/ETCD encryption enabled - https://hub.armo.cloud/docs/c-0066] skipped 😕
Description: All Kubernetes Secrets are stored primarily in etcd; therefore, it is important to encrypt it.
FRAMEWORK NSA


You can see the results in a user-friendly UI, choose your preferred compliance framework, check risk results history and trends, manage exceptions, get remediation recommendations and much more by registering here: https://portal.armo.cloud/cli-signup 

+-----------------------------------------------------------------------+------------------+--------------------+---------------+--------------+
|                             CONTROL NAME                              | FAILED RESOURCES | EXCLUDED RESOURCES | ALL RESOURCES | % RISK-SCORE |
+-----------------------------------------------------------------------+------------------+--------------------+---------------+--------------+
| Allow privilege escalation                                            |        1         |         0          |       1       |     100%     |
| Allowed hostPath                                                      |        0         |         0          |       1       |      0%      |
| Applications credentials in configuration files                       |        0         |         0          |       3       |      0%      |
| Audit logs enabled                                                    |        0         |         0          |       0       |   skipped    |
| Automatic mapping of service account                                  |        4         |         0          |       4       |     100%     |
... skipping 130 lines ...
Sep  8 14:16:29.282: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-e2e-ct0cm6-vmss-mp-0

Sep  8 14:16:29.629: INFO: Collecting logs for Linux node win-p-win000000 in cluster capz-e2e-ct0cm6-vmss in namespace capz-e2e-ct0cm6

Sep  8 14:19:02.396: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-e2e-ct0cm6-vmss-mp-0

Failed to get logs for machine pool capz-e2e-ct0cm6-vmss-mp-0, cluster capz-e2e-ct0cm6/capz-e2e-ct0cm6-vmss: [[running command "cat /var/log/cloud-init.log": Process exited with status 1, running command "cat /var/log/cloud-init-output.log": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u kubelet.service": Process exited with status 1, running command "journalctl --no-pager --output=short-precise": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -k": Process exited with status 1, running command "journalctl --no-pager --output=short-precise -u containerd.service": Process exited with status 1], Unable to collect VMSS Boot Diagnostic logs: failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="NotFound" Message="The entity was not found in this Azure location.", dialing from control plane to target node at win-p-win000000: ssh: rejected: connect failed (Temporary failure in name resolution)]
Sep  8 14:19:03.875: INFO: Collecting logs for Windows node win-p-win000000 in cluster capz-e2e-ct0cm6-vmss in namespace capz-e2e-ct0cm6

Sep  8 14:22:21.283: INFO: Collecting boot logs for VMSS instance 0 of scale set win-p-win

Sep  8 14:22:21.876: INFO: Collecting logs for Windows node win-p-win000001 in cluster capz-e2e-ct0cm6-vmss in namespace capz-e2e-ct0cm6

Sep  8 14:22:51.813: INFO: Collecting boot logs for VMSS instance 1 of scale set win-p-win

Failed to get logs for machine pool capz-e2e-ct0cm6-vmss-mp-win, cluster capz-e2e-ct0cm6/capz-e2e-ct0cm6-vmss: [dialing from control plane to target node at win-p-win000000: ssh: rejected: connect failed (Temporary failure in name resolution), Unable to collect VMSS Boot Diagnostic logs: failed to get boot diagnostics data: compute.VirtualMachineScaleSetVMsClient#RetrieveBootDiagnosticsData: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="NotFound" Message="The entity was not found in this Azure location."]
STEP: Dumping workload cluster capz-e2e-ct0cm6/capz-e2e-ct0cm6-vmss kube-system pod logs
STEP: Fetching kube-system pod logs took 1.143173966s
STEP: Creating log watcher for controller kube-system/calico-node-kmf95, container calico-node
STEP: Collecting events for Pod kube-system/etcd-capz-e2e-ct0cm6-vmss-control-plane-f7b9k
STEP: Creating log watcher for controller kube-system/kube-proxy-vmqcq, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-xp724
... skipping 19 lines ...
STEP: Fetching activity logs took 2.339390089s
STEP: Dumping all the Cluster API resources in the "capz-e2e-ct0cm6" namespace
STEP: Deleting all clusters in the capz-e2e-ct0cm6 namespace
STEP: Deleting cluster capz-e2e-ct0cm6-vmss
INFO: Waiting for the Cluster capz-e2e-ct0cm6/capz-e2e-ct0cm6-vmss to be deleted
STEP: Waiting for cluster capz-e2e-ct0cm6-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-mmc2p, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-ct0cm6-vmss-control-plane-f7b9k, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-ct0cm6-vmss-control-plane-f7b9k, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-vmqcq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-ct0cm6-vmss-control-plane-f7b9k, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-ct0cm6-vmss-control-plane-f7b9k, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-969cf87c4-jlp5z, container calico-kube-controllers: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-kmf95, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-ccjss, container coredns: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-ct0cm6
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes" ran for 53m36s on Ginkgo node 1 of 3

... skipping 7 lines ...
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:310
------------------------------
STEP: Tearing down the management cluster


Ran 4 of 23 Specs in 4712.986 seconds
SUCCESS! -- 4 Passed | 0 Failed | 0 Pending | 19 Skipped


Ginkgo ran 1 suite in 1h20m15.802549366s
Test Suite Passed
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
================ REDACTING LOGS ================
... skipping 10 lines ...