Result   | FAILURE
Tests    | 1 failed / 2 succeeded
Started  |
Elapsed  | 40m6s
Revision | release-1.7
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sConformance\sTests\sconformance\-tests$'
[FAILED] Timed out after 900.001s. Deployment kube-system/csi-azuredisk-controller failed Deployment: { "metadata": { "name": "csi-azuredisk-controller", "namespace": "kube-system", "uid": "aa520115-fce5-4b8a-83d0-97d61c80deeb", "resourceVersion": "2656", "generation": 1, "creationTimestamp": "2023-01-30T10:12:19Z", "labels": { "app.kubernetes.io/instance": "azuredisk-csi-driver-oot", "app.kubernetes.io/managed-by": "Helm", "app.kubernetes.io/name": "azuredisk-csi-driver", "app.kubernetes.io/version": "v1.26.2", "helm.sh/chart": "azuredisk-csi-driver-v1.26.2" }, "annotations": { "deployment.kubernetes.io/revision": "1", "meta.helm.sh/release-name": "azuredisk-csi-driver-oot", "meta.helm.sh/release-namespace": "kube-system" }, "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "apps/v1", "time": "2023-01-30T10:12:19Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:annotations": { ".": {}, "f:meta.helm.sh/release-name": {}, "f:meta.helm.sh/release-namespace": {} }, "f:labels": { ".": {}, "f:app.kubernetes.io/instance": {}, "f:app.kubernetes.io/managed-by": {}, "f:app.kubernetes.io/name": {}, "f:app.kubernetes.io/version": {}, "f:helm.sh/chart": {} } }, "f:spec": { "f:progressDeadlineSeconds": {}, "f:replicas": {}, "f:revisionHistoryLimit": {}, "f:selector": {}, "f:strategy": { "f:rollingUpdate": { ".": {}, "f:maxSurge": {}, "f:maxUnavailable": {} }, "f:type": {} }, "f:template": { "f:metadata": { "f:labels": { ".": {}, "f:app": {}, "f:app.kubernetes.io/instance": {}, "f:app.kubernetes.io/managed-by": {}, "f:app.kubernetes.io/name": {}, "f:app.kubernetes.io/version": {}, "f:helm.sh/chart": {} } }, "f:spec": { "f:containers": { "k:{\"name\":\"azuredisk\"}": { ".": {}, "f:args": {}, "f:env": { ".": {}, "k:{\"name\":\"AZURE_CREDENTIAL_FILE\"}": { ".": {}, "f:name": {}, "f:valueFrom": { ".": {}, "f:configMapKeyRef": {} } }, "k:{\"name\":\"AZURE_GO_SDK_LOG_LEVEL\"}": { ".": {}, "f:name": {} }, "k:{\"name\":\"CSI_ENDPOINT\"}": { ".": {}, "f:name": {}, "f:value": {} } }, "f:image": {}, "f:imagePullPolicy": {}, "f:livenessProbe": { ".": {}, "f:failureThreshold": {}, "f:httpGet": { ".": {}, "f:path": {}, "f:port": {}, "f:scheme": {} }, "f:initialDelaySeconds": {}, "f:periodSeconds": {}, "f:successThreshold": {}, "f:timeoutSeconds": {} }, "f:name": {}, "f:ports": { ".": {}, "k:{\"containerPort\":29602,\"protocol\":\"TCP\"}": { ".": {}, "f:containerPort": {}, "f:hostPort": {}, "f:name": {}, "f:protocol": {} }, "k:{\"containerPort\":29604,\"protocol\":\"TCP\"}": { ".": {}, "f:containerPort": {}, "f:hostPort": {}, "f:name": {}, "f:protocol": {} } }, "f:resources": { ".": {}, "f:limits": { ".": {}, "f:memory": {} }, "f:requests": { ".": {}, "f:cpu": {}, "f:memory": {} } }, "f:terminationMessagePath": {}, "f:terminationMessagePolicy": {}, "f:volumeMounts": { ".": {}, "k:{\"mountPath\":\"/csi\"}": { ".": {}, "f:mountPath": {}, "f:name": {} }, "k:{\"mountPath\":\"/etc/kubernetes/\"}": { ".": {}, "f:mountPath": {}, "f:name": {} } } }, "k:{\"name\":\"csi-attacher\"}": { ".": {}, "f:args": {}, "f:env": { ".": {}, "k:{\"name\":\"ADDRESS\"}": { ".": {}, "f:name": {}, "f:value": {} } }, "f:image": {}, "f:imagePullPolicy": {}, "f:name": {}, "f:resources": { ".": {}, "f:limits": { ".": {}, "f:memory": {} }, "f:requests": { ".": {}, "f:cpu": {}, "f:memory": {} } }, "f:terminationMessagePath": {}, "f:terminationMessagePolicy": {}, "f:volumeMounts": { ".": {}, "k:{\"mountPath\":\"/csi\"}": { ".": {}, "f:mountPath": {}, "f:name": {} } } }, "k:{\"name\":\"csi-provisioner\"}": { 
".": {}, "f:args": {}, "f:env": { ".": {}, "k:{\"name\":\"ADDRESS\"}": { ".": {}, "f:name": {}, "f:value": {} } }, "f:image": {}, "f:imagePullPolicy": {}, "f:name": {}, "f:resources": { ".": {}, "f:limits": { ".": {}, "f:memory": {} }, "f:requests": { ".": {}, "f:cpu": {}, "f:memory": {} } }, "f:terminationMessagePath": {}, "f:terminationMessagePolicy": {}, "f:volumeMounts": { ".": {}, "k:{\"mountPath\":\"/csi\"}": { ".": {}, "f:mountPath": {}, "f:name": {} } } }, "k:{\"name\":\"csi-resizer\"}": { ".": {}, "f:args": {}, "f:env": { ".": {}, "k:{\"name\":\"ADDRESS\"}": { ".": {}, "f:name": {}, "f:value": {} } }, "f:image": {}, "f:imagePullPolicy": {}, "f:name": {}, "f:resources": { ".": {}, "f:limits": { ".": {}, "f:memory": {} }, "f:requests": { ".": {}, "f:cpu": {}, "f:memory": {} } }, "f:terminationMessagePath": {}, "f:terminationMessagePolicy": {}, "f:volumeMounts": { ".": {}, "k:{\"mountPath\":\"/csi\"}": { ".": {}, "f:mountPath": {}, "f:name": {} } } }, "k:{\"name\":\"csi-snapshotter\"}": { ".": {}, "f:args": {}, "f:env": { ".": {}, "k:{\"name\":\"ADDRESS\"}": { ".": {}, "f:name": {}, "f:value": {} } }, "f:image": {}, "f:imagePullPolicy": {}, "f:name": {}, "f:resources": { ".": {}, "f:limits": { ".": {}, "f:memory": {} }, "f:requests": { ".": {}, "f:cpu": {}, "f:memory": {} } }, "f:terminationMessagePath": {}, "f:terminationMessagePolicy": {}, "f:volumeMounts": { ".": {}, "k:{\"mountPath\":\"/csi\"}": { ".": {}, "f:mountPath": {}, "f:name": {} } } }, "k:{\"name\":\"liveness-probe\"}": { ".": {}, "f:args": {}, "f:image": {}, "f:imagePullPolicy": {}, "f:name": {}, "f:resources": { ".": {}, "f:limits": { ".": {}, "f:memory": {} }, "f:requests": { ".": {}, "f:cpu": {}, "f:memory": {} } }, "f:terminationMessagePath": {}, "f:terminationMessagePolicy": {}, "f:volumeMounts": { ".": {}, "k:{\"mountPath\":\"/csi\"}": { ".": {}, "f:mountPath": {}, "f:name": {} } } } }, "f:dnsPolicy": {}, "f:hostNetwork": {}, "f:nodeSelector": {}, "f:priorityClassName": {}, "f:restartPolicy": {}, "f:schedulerName": {}, "f:securityContext": {}, "f:serviceAccount": {}, "f:serviceAccountName": {}, "f:terminationGracePeriodSeconds": {}, "f:tolerations": {}, "f:volumes": { ".": {}, "k:{\"name\":\"azure-cred\"}": { ".": {}, "f:hostPath": { ".": {}, "f:path": {}, "f:type": {} }, "f:name": {} }, "k:{\"name\":\"socket-dir\"}": { ".": {}, "f:emptyDir": {}, "f:name": {} } } } } } } }, { "manager": "kube-controller-manager", "operation": "Update", "apiVersion": "apps/v1", "time": "2023-01-30T10:12:19Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:annotations": { "f:deployment.kubernetes.io/revision": {} } }, "f:status": { "f:conditions": { ".": {}, "k:{\"type\":\"Available\"}": { ".": {}, "f:lastTransitionTime": {}, "f:lastUpdateTime": {}, "f:message": {}, "f:reason": {}, "f:status": {}, "f:type": {} }, "k:{\"type\":\"Progressing\"}": { ".": {}, "f:lastTransitionTime": {}, "f:lastUpdateTime": {}, "f:message": {}, "f:reason": {}, "f:status": {}, "f:type": {} } }, "f:observedGeneration": {}, "f:replicas": {}, "f:unavailableReplicas": {}, "f:updatedReplicas": {} } }, "subresource": "status" } ] }, "spec": { "replicas": 1, "selector": { "matchLabels": { "app": "csi-azuredisk-controller" } }, "template": { "metadata": { "creationTimestamp": null, "labels": { "app": "csi-azuredisk-controller", "app.kubernetes.io/instance": "azuredisk-csi-driver-oot", "app.kubernetes.io/managed-by": "Helm", "app.kubernetes.io/name": "azuredisk-csi-driver", "app.kubernetes.io/version": "v1.26.2", "helm.sh/chart": 
"azuredisk-csi-driver-v1.26.2" } }, "spec": { "volumes": [ { "name": "socket-dir", "emptyDir": {} }, { "name": "azure-cred", "hostPath": { "path": "/etc/kubernetes/", "type": "DirectoryOrCreate" } } ], "containers": [ { "name": "csi-provisioner", "image": "mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.3.0", "args": [ "--feature-gates=Topology=true", "--csi-address=$(ADDRESS)", "--v=2", "--timeout=30s", "--leader-election", "--leader-election-namespace=kube-system", "--worker-threads=100", "--extra-create-metadata=true", "--strict-topology=true", "--kube-api-qps=50", "--kube-api-burst=100" ], "env": [ { "name": "ADDRESS", "value": "/csi/csi.sock" } ], "resources": { "limits": { "memory": "500Mi" }, "requests": { "cpu": "10m", "memory": "20Mi" } }, "volumeMounts": [ { "name": "socket-dir", "mountPath": "/csi" } ], "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "imagePullPolicy": "IfNotPresent" }, { "name": "csi-attacher", "image": "mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v4.0.0", "args": [ "-v=2", "-csi-address=$(ADDRESS)", "-timeout=1200s", "-leader-election", "--leader-election-namespace=kube-system", "-worker-threads=500", "-kube-api-qps=50", "-kube-api-burst=100" ], "env": [ { "name": "ADDRESS", "value": "/csi/csi.sock" } ], "resources": { "limits": { "memory": "500Mi" }, "requests": { "cpu": "10m", "memory": "20Mi" } }, "volumeMounts": [ { "name": "socket-dir", "mountPath": "/csi" } ], "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "imagePullPolicy": "IfNotPresent" }, { "name": "csi-snapshotter", "image": "mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", "args": [ "-csi-address=$(ADDRESS)", "-leader-election", "--leader-election-namespace=kube-system", "-v=2" ], "env": [ { "name": "ADDRESS", "value": "/csi/csi.sock" } ], "resources": { "limits": { "memory": "100Mi" }, "requests": { "cpu": "10m", "memory": "20Mi" } }, "volumeMounts": [ { "name": "socket-dir", "mountPath": "/csi" } ], "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "imagePullPolicy": "IfNotPresent" }, { "name": "csi-resizer", "image": "mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.6.0", "args": [ "-csi-address=$(ADDRESS)", "-v=2", "-leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=240s" ], "env": [ { "name": "ADDRESS", "value": "/csi/csi.sock" } ], "resources": { "limits": { "memory": "500Mi" }, "requests": { "cpu": "10m", "memory": "20Mi" } }, "volumeMounts": [ { "name": "socket-dir", "mountPath": "/csi" } ], "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "imagePullPolicy": "IfNotPresent" }, { "name": "liveness-probe", "image": "mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0", "args": [ "--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29602", "--v=2" ], "resources": { "limits": { "memory": "100Mi" }, "requests": { "cpu": "10m", "memory": "20Mi" } }, "volumeMounts": [ { "name": "socket-dir", "mountPath": "/csi" } ], "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "imagePullPolicy": "IfNotPresent" }, { "name": "azuredisk", "image": "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.26.2", "args": [ "--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29604", "--disable-avset-nodes=false", "--vm-type=", "--drivername=disk.csi.azure.com", 
"--cloud-config-secret-name=azure-cloud-provider", "--cloud-config-secret-namespace=kube-system", "--custom-user-agent=", "--user-agent-suffix=OSS-helm", "--allow-empty-cloud-config=false", "--vmss-cache-ttl-seconds=-1" ], "ports": [ { "name": "healthz", "hostPort": 29602, "containerPort": 29602, "protocol": "TCP" }, { "name": "metrics", "hostPort": 29604, "containerPort": 29604, "protocol": "TCP" } ], "env": [ { "name": "AZURE_CREDENTIAL_FILE", "valueFrom": { "configMapKeyRef": { "name": "azure-cred-file", "key": "path", "optional": true } } }, { "name": "CSI_ENDPOINT", "value": "unix:///csi/csi.sock" }, { "name": "AZURE_GO_SDK_LOG_LEVEL" } ], "resources": { "limits": { "memory": "500Mi" }, "requests": { "cpu": "10m", "memory": "20Mi" } }, "volumeMounts": [ { "name": "socket-dir", "mountPath": "/csi" }, { "name": "azure-cred", "mountPath": "/etc/kubernetes/" } ], "livenessProbe": { "httpGet": { "path": "/healthz", "port": "healthz", "scheme": "HTTP" }, "initialDelaySeconds": 30, "timeoutSeconds": 10, "periodSeconds": 30, "successThreshold": 1, "failureThreshold": 5 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "imagePullPolicy": "IfNotPresent" } ], "restartPolicy": "Always", "terminationGracePeriodSeconds": 30, "dnsPolicy": "ClusterFirst", "nodeSelector": { "kubernetes.io/os": "linux", "node-role.kubernetes.io/control-plane": "" }, "serviceAccountName": "csi-azuredisk-controller-sa", "serviceAccount": "csi-azuredisk-controller-sa", "hostNetwork": true, "securityContext": {}, "schedulerName": "default-scheduler", "tolerations": [ { "key": "node-role.kubernetes.io/master", "operator": "Exists", "effect": "NoSchedule" }, { "key": "node-role.kubernetes.io/controlplane", "operator": "Exists", "effect": "NoSchedule" }, { "key": "node-role.kubernetes.io/control-plane", "operator": "Exists", "effect": "NoSchedule" } ], "priorityClassName": "system-cluster-critical" } }, "strategy": { "type": "RollingUpdate", "rollingUpdate": { "maxUnavailable": "25%", "maxSurge": "25%" } }, "revisionHistoryLimit": 10, "progressDeadlineSeconds": 600 }, "status": { "observedGeneration": 1, "replicas": 1, "updatedReplicas": 1, "unavailableReplicas": 1, "conditions": [ { "type": "Available", "status": "False", "lastUpdateTime": "2023-01-30T10:12:19Z", "lastTransitionTime": "2023-01-30T10:12:19Z", "reason": "MinimumReplicasUnavailable", "message": "Deployment does not have minimum availability." }, { "type": "Progressing", "status": "False", "lastUpdateTime": "2023-01-30T10:22:20Z", "lastTransitionTime": "2023-01-30T10:22:20Z", "reason": "ProgressDeadlineExceeded", "message": "ReplicaSet \"csi-azuredisk-controller-7996479b64\" has timed out progressing." } ] } } LAST SEEN TYPE REASON OBJECT MESSAGE 2023-01-30 10:12:19 +0000 UTC Normal ScalingReplicaSet deployment/csi-azuredisk-controller Scaled up replica set csi-azuredisk-controller-7996479b64 to 1 Expected <bool>: false to be true In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:122 @ 01/30/23 10:27:19.876from junit.e2e_suite.1.xml
> Enter [BeforeEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:56 @ 01/30/23 10:05:55.616 INFO: Cluster name is capz-conf-thzjal STEP: Creating namespace "capz-conf-thzjal" for hosting the cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/30/23 10:05:55.616 Jan 30 10:05:55.616: INFO: starting to create namespace for hosting the "capz-conf-thzjal" test spec INFO: Creating namespace capz-conf-thzjal INFO: Creating event watcher for namespace "capz-conf-thzjal" < Exit [BeforeEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:56 @ 01/30/23 10:05:55.664 (48ms) > Enter [It] conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100 @ 01/30/23 10:05:55.664 conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:102 @ 01/30/23 10:05:55.664 conformance-tests Name | N | Min | Median | Mean | StdDev | Max INFO: Creating the workload cluster with name "capz-conf-thzjal" using the "conformance-ci-artifacts" template (Kubernetes v1.24.11-rc.0.11+73da4d3652771d, 1 control-plane machines, 2 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster capz-conf-thzjal --infrastructure (default) --kubernetes-version v1.24.11-rc.0.11+73da4d3652771d --control-plane-machine-count 1 --worker-machine-count 2 --flavor conformance-ci-artifacts INFO: Applying the cluster template yaml to the cluster INFO: Waiting for the cluster infrastructure to be provisioned STEP: Waiting for cluster to enter the provisioned phase - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/cluster_helpers.go:134 @ 01/30/23 10:05:58.81 INFO: Waiting for control plane to be initialized STEP: Installing Calico CNI via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:51 @ 01/30/23 10:07:48.907 STEP: Configuring calico CNI helm chart for IPv4 configuration - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:131 @ 01/30/23 10:07:48.908 Jan 30 10:10:29.163: INFO: getting history for release projectcalico Jan 30 10:10:29.226: INFO: Release projectcalico does not exist, installing it Jan 30 10:10:30.317: INFO: creating 1 resource(s) Jan 30 10:10:30.401: INFO: creating 1 resource(s) Jan 30 10:10:30.481: INFO: creating 1 resource(s) Jan 30 10:10:30.561: INFO: creating 1 resource(s) Jan 30 10:10:30.641: INFO: creating 1 resource(s) Jan 30 10:10:30.721: INFO: creating 1 resource(s) Jan 30 10:10:30.895: INFO: creating 1 resource(s) Jan 30 10:10:31.009: INFO: creating 1 resource(s) Jan 30 10:10:31.080: INFO: creating 1 resource(s) Jan 30 10:10:31.162: INFO: creating 1 resource(s) Jan 30 10:10:31.247: INFO: creating 1 resource(s) Jan 30 10:10:31.833: INFO: creating 1 resource(s) Jan 30 10:10:31.904: INFO: creating 1 resource(s) Jan 30 10:10:31.983: INFO: creating 1 resource(s) Jan 30 10:10:32.065: INFO: creating 1 resource(s) Jan 30 10:10:32.157: INFO: creating 1 resource(s) Jan 30 10:10:32.329: INFO: creating 1 resource(s) Jan 30 10:10:32.422: INFO: creating 1 resource(s) Jan 30 10:10:32.597: INFO: creating 1 resource(s) Jan 30 10:10:32.795: INFO: creating 1 resource(s) Jan 30 10:10:33.170: INFO: creating 1 resource(s) Jan 30 10:10:33.251: INFO: Clearing discovery cache Jan 30 10:10:33.251: INFO: beginning wait for 21 resources with timeout of 1m0s Jan 30 10:10:36.976: INFO: creating 1 
resource(s) Jan 30 10:10:37.746: INFO: creating 6 resource(s) Jan 30 10:10:38.675: INFO: Install complete STEP: Waiting for Ready tigera-operator deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:60 @ 01/30/23 10:10:39.168 STEP: waiting for deployment tigera-operator/tigera-operator to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/30/23 10:10:39.611 Jan 30 10:10:39.611: INFO: starting to wait for deployment to become available Jan 30 10:10:49.734: INFO: Deployment tigera-operator/tigera-operator is now available, took 10.123034127s STEP: Waiting for Ready calico-system deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:74 @ 01/30/23 10:10:51.344 STEP: waiting for deployment calico-system/calico-kube-controllers to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/30/23 10:10:51.913 Jan 30 10:10:51.913: INFO: starting to wait for deployment to become available Jan 30 10:11:52.894: INFO: Deployment calico-system/calico-kube-controllers is now available, took 1m0.981035696s STEP: waiting for deployment calico-system/calico-typha to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/30/23 10:11:53.451 Jan 30 10:11:53.451: INFO: starting to wait for deployment to become available Jan 30 10:11:53.512: INFO: Deployment calico-system/calico-typha is now available, took 61.498194ms STEP: Waiting for Ready calico-apiserver deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:79 @ 01/30/23 10:11:53.512 STEP: waiting for deployment calico-apiserver/calico-apiserver to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/30/23 10:11:53.954 Jan 30 10:11:53.954: INFO: starting to wait for deployment to become available Jan 30 10:12:14.141: INFO: Deployment calico-apiserver/calico-apiserver is now available, took 20.18707943s STEP: Waiting for Ready calico-node daemonset pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:84 @ 01/30/23 10:12:14.141 STEP: waiting for daemonset calico-system/calico-node to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/30/23 10:12:14.451 Jan 30 10:12:14.452: INFO: waiting for daemonset calico-system/calico-node to be complete Jan 30 10:12:14.514: INFO: 1 daemonset calico-system/calico-node pods are running, took 62.584687ms STEP: Waiting for Ready calico windows pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:91 @ 01/30/23 10:12:14.514 STEP: waiting for daemonset calico-system/calico-node-windows to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/30/23 10:12:14.821 Jan 30 10:12:14.821: INFO: waiting for daemonset calico-system/calico-node-windows to be complete Jan 30 10:12:14.882: INFO: 0 daemonset calico-system/calico-node-windows pods are running, took 60.920398ms STEP: Waiting for Ready calico windows pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cni.go:97 @ 01/30/23 10:12:14.882 STEP: waiting for daemonset kube-system/kube-proxy-windows to be complete - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/30/23 10:12:15.134 Jan 30 10:12:15.134: INFO: waiting for daemonset kube-system/kube-proxy-windows to be complete Jan 30 10:12:15.194: INFO: 0 
daemonset kube-system/kube-proxy-windows pods are running, took 60.298977ms INFO: Waiting for the first control plane machine managed by capz-conf-thzjal/capz-conf-thzjal-control-plane to be provisioned STEP: Waiting for one control plane node to exist - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/controlplane_helpers.go:133 @ 01/30/23 10:12:15.231 STEP: Installing azure-disk CSI driver components via helm - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:71 @ 01/30/23 10:12:15.242 Jan 30 10:12:15.340: INFO: getting history for release azuredisk-csi-driver-oot Jan 30 10:12:15.406: INFO: Release azuredisk-csi-driver-oot does not exist, installing it Jan 30 10:12:18.717: INFO: creating 1 resource(s) Jan 30 10:12:18.987: INFO: creating 18 resource(s) Jan 30 10:12:19.516: INFO: Install complete STEP: Waiting for Ready csi-azuredisk-controller deployment pods - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/cloud-provider-azure.go:81 @ 01/30/23 10:12:19.547 STEP: waiting for deployment kube-system/csi-azuredisk-controller to be available - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/30/23 10:12:19.807 Jan 30 10:12:19.807: INFO: starting to wait for deployment to become available [FAILED] Timed out after 900.001s. Deployment kube-system/csi-azuredisk-controller failed Deployment: { "metadata": { "name": "csi-azuredisk-controller", "namespace": "kube-system", "uid": "aa520115-fce5-4b8a-83d0-97d61c80deeb", "resourceVersion": "2656", "generation": 1, "creationTimestamp": "2023-01-30T10:12:19Z", "labels": { "app.kubernetes.io/instance": "azuredisk-csi-driver-oot", "app.kubernetes.io/managed-by": "Helm", "app.kubernetes.io/name": "azuredisk-csi-driver", "app.kubernetes.io/version": "v1.26.2", "helm.sh/chart": "azuredisk-csi-driver-v1.26.2" }, "annotations": { "deployment.kubernetes.io/revision": "1", "meta.helm.sh/release-name": "azuredisk-csi-driver-oot", "meta.helm.sh/release-namespace": "kube-system" }, "managedFields": [ { "manager": "e2e.test", "operation": "Update", "apiVersion": "apps/v1", "time": "2023-01-30T10:12:19Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:annotations": { ".": {}, "f:meta.helm.sh/release-name": {}, "f:meta.helm.sh/release-namespace": {} }, "f:labels": { ".": {}, "f:app.kubernetes.io/instance": {}, "f:app.kubernetes.io/managed-by": {}, "f:app.kubernetes.io/name": {}, "f:app.kubernetes.io/version": {}, "f:helm.sh/chart": {} } }, "f:spec": { "f:progressDeadlineSeconds": {}, "f:replicas": {}, "f:revisionHistoryLimit": {}, "f:selector": {}, "f:strategy": { "f:rollingUpdate": { ".": {}, "f:maxSurge": {}, "f:maxUnavailable": {} }, "f:type": {} }, "f:template": { "f:metadata": { "f:labels": { ".": {}, "f:app": {}, "f:app.kubernetes.io/instance": {}, "f:app.kubernetes.io/managed-by": {}, "f:app.kubernetes.io/name": {}, "f:app.kubernetes.io/version": {}, "f:helm.sh/chart": {} } }, "f:spec": { "f:containers": { "k:{\"name\":\"azuredisk\"}": { ".": {}, "f:args": {}, "f:env": { ".": {}, "k:{\"name\":\"AZURE_CREDENTIAL_FILE\"}": { ".": {}, "f:name": {}, "f:valueFrom": { ".": {}, "f:configMapKeyRef": {} } }, "k:{\"name\":\"AZURE_GO_SDK_LOG_LEVEL\"}": { ".": {}, "f:name": {} }, "k:{\"name\":\"CSI_ENDPOINT\"}": { ".": {}, "f:name": {}, "f:value": {} } }, "f:image": {}, "f:imagePullPolicy": {}, "f:livenessProbe": { ".": {}, "f:failureThreshold": {}, "f:httpGet": { ".": {}, "f:path": {}, "f:port": {}, "f:scheme": {} }, "f:initialDelaySeconds": {}, 
"f:periodSeconds": {}, "f:successThreshold": {}, "f:timeoutSeconds": {} }, "f:name": {}, "f:ports": { ".": {}, "k:{\"containerPort\":29602,\"protocol\":\"TCP\"}": { ".": {}, "f:containerPort": {}, "f:hostPort": {}, "f:name": {}, "f:protocol": {} }, "k:{\"containerPort\":29604,\"protocol\":\"TCP\"}": { ".": {}, "f:containerPort": {}, "f:hostPort": {}, "f:name": {}, "f:protocol": {} } }, "f:resources": { ".": {}, "f:limits": { ".": {}, "f:memory": {} }, "f:requests": { ".": {}, "f:cpu": {}, "f:memory": {} } }, "f:terminationMessagePath": {}, "f:terminationMessagePolicy": {}, "f:volumeMounts": { ".": {}, "k:{\"mountPath\":\"/csi\"}": { ".": {}, "f:mountPath": {}, "f:name": {} }, "k:{\"mountPath\":\"/etc/kubernetes/\"}": { ".": {}, "f:mountPath": {}, "f:name": {} } } }, "k:{\"name\":\"csi-attacher\"}": { ".": {}, "f:args": {}, "f:env": { ".": {}, "k:{\"name\":\"ADDRESS\"}": { ".": {}, "f:name": {}, "f:value": {} } }, "f:image": {}, "f:imagePullPolicy": {}, "f:name": {}, "f:resources": { ".": {}, "f:limits": { ".": {}, "f:memory": {} }, "f:requests": { ".": {}, "f:cpu": {}, "f:memory": {} } }, "f:terminationMessagePath": {}, "f:terminationMessagePolicy": {}, "f:volumeMounts": { ".": {}, "k:{\"mountPath\":\"/csi\"}": { ".": {}, "f:mountPath": {}, "f:name": {} } } }, "k:{\"name\":\"csi-provisioner\"}": { ".": {}, "f:args": {}, "f:env": { ".": {}, "k:{\"name\":\"ADDRESS\"}": { ".": {}, "f:name": {}, "f:value": {} } }, "f:image": {}, "f:imagePullPolicy": {}, "f:name": {}, "f:resources": { ".": {}, "f:limits": { ".": {}, "f:memory": {} }, "f:requests": { ".": {}, "f:cpu": {}, "f:memory": {} } }, "f:terminationMessagePath": {}, "f:terminationMessagePolicy": {}, "f:volumeMounts": { ".": {}, "k:{\"mountPath\":\"/csi\"}": { ".": {}, "f:mountPath": {}, "f:name": {} } } }, "k:{\"name\":\"csi-resizer\"}": { ".": {}, "f:args": {}, "f:env": { ".": {}, "k:{\"name\":\"ADDRESS\"}": { ".": {}, "f:name": {}, "f:value": {} } }, "f:image": {}, "f:imagePullPolicy": {}, "f:name": {}, "f:resources": { ".": {}, "f:limits": { ".": {}, "f:memory": {} }, "f:requests": { ".": {}, "f:cpu": {}, "f:memory": {} } }, "f:terminationMessagePath": {}, "f:terminationMessagePolicy": {}, "f:volumeMounts": { ".": {}, "k:{\"mountPath\":\"/csi\"}": { ".": {}, "f:mountPath": {}, "f:name": {} } } }, "k:{\"name\":\"csi-snapshotter\"}": { ".": {}, "f:args": {}, "f:env": { ".": {}, "k:{\"name\":\"ADDRESS\"}": { ".": {}, "f:name": {}, "f:value": {} } }, "f:image": {}, "f:imagePullPolicy": {}, "f:name": {}, "f:resources": { ".": {}, "f:limits": { ".": {}, "f:memory": {} }, "f:requests": { ".": {}, "f:cpu": {}, "f:memory": {} } }, "f:terminationMessagePath": {}, "f:terminationMessagePolicy": {}, "f:volumeMounts": { ".": {}, "k:{\"mountPath\":\"/csi\"}": { ".": {}, "f:mountPath": {}, "f:name": {} } } }, "k:{\"name\":\"liveness-probe\"}": { ".": {}, "f:args": {}, "f:image": {}, "f:imagePullPolicy": {}, "f:name": {}, "f:resources": { ".": {}, "f:limits": { ".": {}, "f:memory": {} }, "f:requests": { ".": {}, "f:cpu": {}, "f:memory": {} } }, "f:terminationMessagePath": {}, "f:terminationMessagePolicy": {}, "f:volumeMounts": { ".": {}, "k:{\"mountPath\":\"/csi\"}": { ".": {}, "f:mountPath": {}, "f:name": {} } } } }, "f:dnsPolicy": {}, "f:hostNetwork": {}, "f:nodeSelector": {}, "f:priorityClassName": {}, "f:restartPolicy": {}, "f:schedulerName": {}, "f:securityContext": {}, "f:serviceAccount": {}, "f:serviceAccountName": {}, "f:terminationGracePeriodSeconds": {}, "f:tolerations": {}, "f:volumes": { ".": {}, "k:{\"name\":\"azure-cred\"}": { ".": {}, 
"f:hostPath": { ".": {}, "f:path": {}, "f:type": {} }, "f:name": {} }, "k:{\"name\":\"socket-dir\"}": { ".": {}, "f:emptyDir": {}, "f:name": {} } } } } } } }, { "manager": "kube-controller-manager", "operation": "Update", "apiVersion": "apps/v1", "time": "2023-01-30T10:12:19Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:annotations": { "f:deployment.kubernetes.io/revision": {} } }, "f:status": { "f:conditions": { ".": {}, "k:{\"type\":\"Available\"}": { ".": {}, "f:lastTransitionTime": {}, "f:lastUpdateTime": {}, "f:message": {}, "f:reason": {}, "f:status": {}, "f:type": {} }, "k:{\"type\":\"Progressing\"}": { ".": {}, "f:lastTransitionTime": {}, "f:lastUpdateTime": {}, "f:message": {}, "f:reason": {}, "f:status": {}, "f:type": {} } }, "f:observedGeneration": {}, "f:replicas": {}, "f:unavailableReplicas": {}, "f:updatedReplicas": {} } }, "subresource": "status" } ] }, "spec": { "replicas": 1, "selector": { "matchLabels": { "app": "csi-azuredisk-controller" } }, "template": { "metadata": { "creationTimestamp": null, "labels": { "app": "csi-azuredisk-controller", "app.kubernetes.io/instance": "azuredisk-csi-driver-oot", "app.kubernetes.io/managed-by": "Helm", "app.kubernetes.io/name": "azuredisk-csi-driver", "app.kubernetes.io/version": "v1.26.2", "helm.sh/chart": "azuredisk-csi-driver-v1.26.2" } }, "spec": { "volumes": [ { "name": "socket-dir", "emptyDir": {} }, { "name": "azure-cred", "hostPath": { "path": "/etc/kubernetes/", "type": "DirectoryOrCreate" } } ], "containers": [ { "name": "csi-provisioner", "image": "mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.3.0", "args": [ "--feature-gates=Topology=true", "--csi-address=$(ADDRESS)", "--v=2", "--timeout=30s", "--leader-election", "--leader-election-namespace=kube-system", "--worker-threads=100", "--extra-create-metadata=true", "--strict-topology=true", "--kube-api-qps=50", "--kube-api-burst=100" ], "env": [ { "name": "ADDRESS", "value": "/csi/csi.sock" } ], "resources": { "limits": { "memory": "500Mi" }, "requests": { "cpu": "10m", "memory": "20Mi" } }, "volumeMounts": [ { "name": "socket-dir", "mountPath": "/csi" } ], "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "imagePullPolicy": "IfNotPresent" }, { "name": "csi-attacher", "image": "mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v4.0.0", "args": [ "-v=2", "-csi-address=$(ADDRESS)", "-timeout=1200s", "-leader-election", "--leader-election-namespace=kube-system", "-worker-threads=500", "-kube-api-qps=50", "-kube-api-burst=100" ], "env": [ { "name": "ADDRESS", "value": "/csi/csi.sock" } ], "resources": { "limits": { "memory": "500Mi" }, "requests": { "cpu": "10m", "memory": "20Mi" } }, "volumeMounts": [ { "name": "socket-dir", "mountPath": "/csi" } ], "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "imagePullPolicy": "IfNotPresent" }, { "name": "csi-snapshotter", "image": "mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", "args": [ "-csi-address=$(ADDRESS)", "-leader-election", "--leader-election-namespace=kube-system", "-v=2" ], "env": [ { "name": "ADDRESS", "value": "/csi/csi.sock" } ], "resources": { "limits": { "memory": "100Mi" }, "requests": { "cpu": "10m", "memory": "20Mi" } }, "volumeMounts": [ { "name": "socket-dir", "mountPath": "/csi" } ], "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "imagePullPolicy": "IfNotPresent" }, { "name": "csi-resizer", "image": "mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.6.0", "args": [ 
"-csi-address=$(ADDRESS)", "-v=2", "-leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=240s" ], "env": [ { "name": "ADDRESS", "value": "/csi/csi.sock" } ], "resources": { "limits": { "memory": "500Mi" }, "requests": { "cpu": "10m", "memory": "20Mi" } }, "volumeMounts": [ { "name": "socket-dir", "mountPath": "/csi" } ], "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "imagePullPolicy": "IfNotPresent" }, { "name": "liveness-probe", "image": "mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.8.0", "args": [ "--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29602", "--v=2" ], "resources": { "limits": { "memory": "100Mi" }, "requests": { "cpu": "10m", "memory": "20Mi" } }, "volumeMounts": [ { "name": "socket-dir", "mountPath": "/csi" } ], "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "imagePullPolicy": "IfNotPresent" }, { "name": "azuredisk", "image": "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.26.2", "args": [ "--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29604", "--disable-avset-nodes=false", "--vm-type=", "--drivername=disk.csi.azure.com", "--cloud-config-secret-name=azure-cloud-provider", "--cloud-config-secret-namespace=kube-system", "--custom-user-agent=", "--user-agent-suffix=OSS-helm", "--allow-empty-cloud-config=false", "--vmss-cache-ttl-seconds=-1" ], "ports": [ { "name": "healthz", "hostPort": 29602, "containerPort": 29602, "protocol": "TCP" }, { "name": "metrics", "hostPort": 29604, "containerPort": 29604, "protocol": "TCP" } ], "env": [ { "name": "AZURE_CREDENTIAL_FILE", "valueFrom": { "configMapKeyRef": { "name": "azure-cred-file", "key": "path", "optional": true } } }, { "name": "CSI_ENDPOINT", "value": "unix:///csi/csi.sock" }, { "name": "AZURE_GO_SDK_LOG_LEVEL" } ], "resources": { "limits": { "memory": "500Mi" }, "requests": { "cpu": "10m", "memory": "20Mi" } }, "volumeMounts": [ { "name": "socket-dir", "mountPath": "/csi" }, { "name": "azure-cred", "mountPath": "/etc/kubernetes/" } ], "livenessProbe": { "httpGet": { "path": "/healthz", "port": "healthz", "scheme": "HTTP" }, "initialDelaySeconds": 30, "timeoutSeconds": 10, "periodSeconds": 30, "successThreshold": 1, "failureThreshold": 5 }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "imagePullPolicy": "IfNotPresent" } ], "restartPolicy": "Always", "terminationGracePeriodSeconds": 30, "dnsPolicy": "ClusterFirst", "nodeSelector": { "kubernetes.io/os": "linux", "node-role.kubernetes.io/control-plane": "" }, "serviceAccountName": "csi-azuredisk-controller-sa", "serviceAccount": "csi-azuredisk-controller-sa", "hostNetwork": true, "securityContext": {}, "schedulerName": "default-scheduler", "tolerations": [ { "key": "node-role.kubernetes.io/master", "operator": "Exists", "effect": "NoSchedule" }, { "key": "node-role.kubernetes.io/controlplane", "operator": "Exists", "effect": "NoSchedule" }, { "key": "node-role.kubernetes.io/control-plane", "operator": "Exists", "effect": "NoSchedule" } ], "priorityClassName": "system-cluster-critical" } }, "strategy": { "type": "RollingUpdate", "rollingUpdate": { "maxUnavailable": "25%", "maxSurge": "25%" } }, "revisionHistoryLimit": 10, "progressDeadlineSeconds": 600 }, "status": { "observedGeneration": 1, "replicas": 1, "updatedReplicas": 1, "unavailableReplicas": 1, "conditions": [ { "type": "Available", "status": 
"False", "lastUpdateTime": "2023-01-30T10:12:19Z", "lastTransitionTime": "2023-01-30T10:12:19Z", "reason": "MinimumReplicasUnavailable", "message": "Deployment does not have minimum availability." }, { "type": "Progressing", "status": "False", "lastUpdateTime": "2023-01-30T10:22:20Z", "lastTransitionTime": "2023-01-30T10:22:20Z", "reason": "ProgressDeadlineExceeded", "message": "ReplicaSet \"csi-azuredisk-controller-7996479b64\" has timed out progressing." } ] } } LAST SEEN TYPE REASON OBJECT MESSAGE 2023-01-30 10:12:19 +0000 UTC Normal ScalingReplicaSet deployment/csi-azuredisk-controller Scaled up replica set csi-azuredisk-controller-7996479b64 to 1 Expected <bool>: false to be true In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:122 @ 01/30/23 10:27:19.876 < Exit [It] conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100 @ 01/30/23 10:27:19.877 (21m24.212s) > Enter [AfterEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:242 @ 01/30/23 10:27:19.877 Jan 30 10:27:19.877: INFO: FAILED! Jan 30 10:27:19.877: INFO: Cleaning up after "Conformance Tests conformance-tests" spec STEP: Dumping logs from the "capz-conf-thzjal" workload cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:88 @ 01/30/23 10:27:19.877 Jan 30 10:27:19.877: INFO: Dumping workload cluster capz-conf-thzjal/capz-conf-thzjal logs Jan 30 10:27:19.925: INFO: Collecting logs for Linux node capz-conf-thzjal-control-plane-sdcsl in cluster capz-conf-thzjal in namespace capz-conf-thzjal Jan 30 10:27:32.244: INFO: Collecting boot logs for AzureMachine capz-conf-thzjal-control-plane-sdcsl Jan 30 10:27:33.433: INFO: Collecting logs for Linux node capz-conf-thzjal-md-0-sz5vw in cluster capz-conf-thzjal in namespace capz-conf-thzjal Jan 30 10:27:43.900: INFO: Collecting boot logs for AzureMachine capz-conf-thzjal-md-0-sz5vw Jan 30 10:27:44.429: INFO: Collecting logs for Linux node capz-conf-thzjal-md-0-sktl8 in cluster capz-conf-thzjal in namespace capz-conf-thzjal Jan 30 10:27:52.826: INFO: Collecting boot logs for AzureMachine capz-conf-thzjal-md-0-sktl8 Jan 30 10:27:53.492: INFO: Dumping workload cluster capz-conf-thzjal/capz-conf-thzjal kube-system pod logs Jan 30 10:27:54.068: INFO: Describing Pod calico-apiserver/calico-apiserver-5cc58b87f8-czgwx Jan 30 10:27:54.068: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-5cc58b87f8-czgwx, container calico-apiserver Jan 30 10:27:54.190: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-5cc58b87f8-x58hq, container calico-apiserver Jan 30 10:27:54.190: INFO: Describing Pod calico-apiserver/calico-apiserver-5cc58b87f8-x58hq Jan 30 10:27:54.313: INFO: Creating log watcher for controller calico-system/calico-kube-controllers-594d54f99-wltjv, container calico-kube-controllers Jan 30 10:27:54.314: INFO: Describing Pod calico-system/calico-kube-controllers-594d54f99-wltjv Jan 30 10:27:54.438: INFO: Creating log watcher for controller calico-system/calico-node-6k9pn, container calico-node Jan 30 10:27:54.438: INFO: Describing Pod calico-system/calico-node-6k9pn Jan 30 10:27:54.563: INFO: Creating log watcher for controller calico-system/calico-node-q7jzl, container calico-node Jan 30 10:27:54.563: INFO: Describing Pod calico-system/calico-node-q7jzl Jan 30 10:27:54.688: INFO: Creating log watcher for controller calico-system/calico-node-sqp6v, container 
calico-node Jan 30 10:27:54.688: INFO: Describing Pod calico-system/calico-node-sqp6v Jan 30 10:27:54.811: INFO: Creating log watcher for controller calico-system/calico-typha-765bfbdd75-92qrc, container calico-typha Jan 30 10:27:54.811: INFO: Describing Pod calico-system/calico-typha-765bfbdd75-92qrc Jan 30 10:27:55.207: INFO: Describing Pod calico-system/calico-typha-765bfbdd75-dvzmr Jan 30 10:27:55.207: INFO: Creating log watcher for controller calico-system/calico-typha-765bfbdd75-dvzmr, container calico-typha Jan 30 10:27:55.607: INFO: Describing Pod calico-system/csi-node-driver-pm6rd Jan 30 10:27:55.607: INFO: Creating log watcher for controller calico-system/csi-node-driver-pm6rd, container calico-csi Jan 30 10:27:55.607: INFO: Creating log watcher for controller calico-system/csi-node-driver-pm6rd, container csi-node-driver-registrar Jan 30 10:27:56.008: INFO: Creating log watcher for controller calico-system/csi-node-driver-zcqpw, container calico-csi Jan 30 10:27:56.008: INFO: Creating log watcher for controller calico-system/csi-node-driver-zcqpw, container csi-node-driver-registrar Jan 30 10:27:56.008: INFO: Describing Pod calico-system/csi-node-driver-zcqpw Jan 30 10:27:56.410: INFO: Creating log watcher for controller calico-system/csi-node-driver-zvn9l, container calico-csi Jan 30 10:27:56.410: INFO: Describing Pod calico-system/csi-node-driver-zvn9l Jan 30 10:27:56.410: INFO: Creating log watcher for controller calico-system/csi-node-driver-zvn9l, container csi-node-driver-registrar Jan 30 10:27:56.808: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-cjsfp, container coredns Jan 30 10:27:56.808: INFO: Describing Pod kube-system/coredns-57575c5f89-cjsfp Jan 30 10:27:57.208: INFO: Creating log watcher for controller kube-system/coredns-57575c5f89-hdlwm, container coredns Jan 30 10:27:57.208: INFO: Describing Pod kube-system/coredns-57575c5f89-hdlwm Jan 30 10:27:57.615: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-7996479b64-hsfmv, container csi-provisioner Jan 30 10:27:57.615: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-7996479b64-hsfmv, container csi-resizer Jan 30 10:27:57.615: INFO: Describing Pod kube-system/csi-azuredisk-controller-7996479b64-hsfmv Jan 30 10:27:57.615: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-7996479b64-hsfmv, container csi-attacher Jan 30 10:27:57.615: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-7996479b64-hsfmv, container azuredisk Jan 30 10:27:57.615: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-7996479b64-hsfmv, container liveness-probe Jan 30 10:27:57.615: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-7996479b64-hsfmv, container csi-snapshotter Jan 30 10:27:57.699: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-controller-7996479b64-hsfmv, container azuredisk: container "azuredisk" in pod "csi-azuredisk-controller-7996479b64-hsfmv" is waiting to start: trying and failing to pull image Jan 30 10:27:58.015: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-56n7d, container liveness-probe Jan 30 10:27:58.015: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-56n7d, container node-driver-registrar Jan 30 10:27:58.015: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-56n7d, container azuredisk Jan 30 10:27:58.015: INFO: Describing Pod 
kube-system/csi-azuredisk-node-56n7d Jan 30 10:27:58.091: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-56n7d, container azuredisk: container "azuredisk" in pod "csi-azuredisk-node-56n7d" is waiting to start: trying and failing to pull image Jan 30 10:27:58.409: INFO: Describing Pod kube-system/csi-azuredisk-node-lmbxx Jan 30 10:27:58.409: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-lmbxx, container liveness-probe Jan 30 10:27:58.409: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-lmbxx, container node-driver-registrar Jan 30 10:27:58.409: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-lmbxx, container azuredisk Jan 30 10:27:58.483: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-lmbxx, container azuredisk: container "azuredisk" in pod "csi-azuredisk-node-lmbxx" is waiting to start: trying and failing to pull image Jan 30 10:27:58.809: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-sjk2k, container node-driver-registrar Jan 30 10:27:58.809: INFO: Describing Pod kube-system/csi-azuredisk-node-sjk2k Jan 30 10:27:58.810: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-sjk2k, container azuredisk Jan 30 10:27:58.810: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-sjk2k, container liveness-probe Jan 30 10:27:58.882: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-sjk2k, container azuredisk: container "azuredisk" in pod "csi-azuredisk-node-sjk2k" is waiting to start: trying and failing to pull image Jan 30 10:27:59.208: INFO: Describing Pod kube-system/etcd-capz-conf-thzjal-control-plane-sdcsl Jan 30 10:27:59.208: INFO: Creating log watcher for controller kube-system/etcd-capz-conf-thzjal-control-plane-sdcsl, container etcd Jan 30 10:27:59.606: INFO: Creating log watcher for controller kube-system/kube-apiserver-capz-conf-thzjal-control-plane-sdcsl, container kube-apiserver Jan 30 10:27:59.606: INFO: Describing Pod kube-system/kube-apiserver-capz-conf-thzjal-control-plane-sdcsl Jan 30 10:28:00.007: INFO: Describing Pod kube-system/kube-controller-manager-capz-conf-thzjal-control-plane-sdcsl Jan 30 10:28:00.007: INFO: Creating log watcher for controller kube-system/kube-controller-manager-capz-conf-thzjal-control-plane-sdcsl, container kube-controller-manager Jan 30 10:28:00.408: INFO: Describing Pod kube-system/kube-proxy-49lvg Jan 30 10:28:00.408: INFO: Creating log watcher for controller kube-system/kube-proxy-49lvg, container kube-proxy Jan 30 10:28:00.806: INFO: Creating log watcher for controller kube-system/kube-proxy-hj2mx, container kube-proxy Jan 30 10:28:00.806: INFO: Describing Pod kube-system/kube-proxy-hj2mx Jan 30 10:28:01.210: INFO: Creating log watcher for controller kube-system/kube-proxy-hnwvb, container kube-proxy Jan 30 10:28:01.210: INFO: Describing Pod kube-system/kube-proxy-hnwvb Jan 30 10:28:01.607: INFO: Creating log watcher for controller kube-system/kube-scheduler-capz-conf-thzjal-control-plane-sdcsl, container kube-scheduler Jan 30 10:28:01.607: INFO: Describing Pod kube-system/kube-scheduler-capz-conf-thzjal-control-plane-sdcsl Jan 30 10:28:02.008: INFO: Creating log watcher for controller kube-system/metrics-server-7d674f87b8-zsrxg, container metrics-server Jan 30 10:28:02.008: INFO: Describing Pod kube-system/metrics-server-7d674f87b8-zsrxg Jan 30 10:28:02.406: INFO: Fetching kube-system pod logs took 8.914015376s Jan 30 10:28:02.406: 
INFO: Dumping workload cluster capz-conf-thzjal/capz-conf-thzjal Azure activity log Jan 30 10:28:02.406: INFO: Creating log watcher for controller tigera-operator/tigera-operator-65d6bf4d4f-v5scn, container tigera-operator Jan 30 10:28:02.406: INFO: Describing Pod tigera-operator/tigera-operator-65d6bf4d4f-v5scn Jan 30 10:28:06.854: INFO: Fetching activity logs took 4.44817392s Jan 30 10:28:06.854: INFO: Dumping all the Cluster API resources in the "capz-conf-thzjal" namespace Jan 30 10:28:07.399: INFO: Deleting all clusters in the capz-conf-thzjal namespace STEP: Deleting cluster capz-conf-thzjal - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/30/23 10:28:07.424 INFO: Waiting for the Cluster capz-conf-thzjal/capz-conf-thzjal to be deleted STEP: Waiting for cluster capz-conf-thzjal to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/30/23 10:28:07.444 Jan 30 10:33:47.647: INFO: Deleting namespace used for hosting the "conformance-tests" test spec INFO: Deleting namespace capz-conf-thzjal Jan 30 10:33:47.669: INFO: Checking if any resources are left over in Azure for spec "conformance-tests" STEP: Redacting sensitive information from logs - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:212 @ 01/30/23 10:33:48.48 < Exit [AfterEach] Conformance Tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:242 @ 01/30/23 10:34:03.491 (6m43.614s)
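The AfterEach dump above points at the likely root cause: every azuredisk container (controller and node pods) is "waiting to start: trying and failing to pull image", i.e. the image named in the Deployment spec never landed on the nodes. A hedged client-go sketch for listing those stuck containers directly is below; the kubeconfig path is an assumption, while the namespace and label selector are taken from the Deployment spec above.

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path for the capz-conf-thzjal workload cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/workload-kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Selector comes from spec.selector.matchLabels of csi-azuredisk-controller.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "app=csi-azuredisk-controller"})
	if err != nil {
		log.Fatal(err)
	}
	for _, pod := range pods.Items {
		for _, st := range pod.Status.ContainerStatuses {
			if st.State.Waiting != nil {
				// Expect reasons like ErrImagePull / ImagePullBackOff here.
				fmt.Printf("%s container %q waiting: %s %s\n",
					pod.Name, st.Name, st.State.Waiting.Reason, st.State.Waiting.Message)
			}
		}
	}
}
```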
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedBeforeSuite]
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running KCP upgrade in a HA cluster using scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Running the MachineDeployment rollout spec Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
capz-e2e [It] Running the Cluster API E2E tests Running the quick-start spec Should create a workload cluster
capz-e2e [It] Running the Cluster API E2E tests Running the workload cluster upgrade spec [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capz-e2e [It] Running the Cluster API E2E tests Should successfully exercise machine pools Should successfully create a cluster with machine pool machines
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger KCP remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully remediate unhealthy machines with MachineHealthCheck Should successfully trigger machine deployment remediation
capz-e2e [It] Running the Cluster API E2E tests Should successfully scale out and scale in a MachineDeployment Should successfully scale a MachineDeployment up and down upon changes to the MachineDeployment replica count
capz-e2e [It] Running the Cluster API E2E tests Should successfully set and use node drain timeout A node should be forcefully removed if it cannot be drained in time
capz-e2e [It] Workload cluster creation Creating a GPU-enabled cluster [OPTIONAL] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating a VMSS cluster [REQUIRED] with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and external azurediskcsi driver [OPTIONAL] with a 1 control plane nodes and 2 worker nodes
capz-e2e [It] Workload cluster creation Creating a cluster that uses the external cloud provider and machinepools [OPTIONAL] with 1 control plane node and 1 machinepool
capz-e2e [It] Workload cluster creation Creating a dual-stack cluster [OPTIONAL] With dual-stack worker node
capz-e2e [It] Workload cluster creation Creating a highly available cluster [REQUIRED] With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
capz-e2e [It] Workload cluster creation Creating a ipv6 control-plane cluster [REQUIRED] With ipv6 worker node
capz-e2e [It] Workload cluster creation Creating an AKS cluster [EXPERIMENTAL][Managed Kubernetes] with a single control plane node and 1 node
capz-e2e [It] Workload cluster creation Creating clusters using clusterclass [OPTIONAL] with a single control plane node, one linux worker node, and one windows worker node
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=external CCM=external AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with out-of-tree cloud provider
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=external CCM=internal AzureDiskCSIMigration=true: upgrade to v1.23 should create volumes dynamically with intree cloud provider
capz-e2e [It] [K8s-Upgrade] Running the CSI migration tests [CSI Migration] Running CSI migration test CSI=internal CCM=internal AzureDiskCSIMigration=false: upgrade to v1.23 should create volumes dynamically with intree cloud provider
... skipping 484 lines ... ------------------------------ Conformance Tests conformance-tests /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100 INFO: Cluster name is capz-conf-thzjal STEP: Creating namespace "capz-conf-thzjal" for hosting the cluster @ 01/30/23 10:05:55.616 Jan 30 10:05:55.616: INFO: starting to create namespace for hosting the "capz-conf-thzjal" test spec 2023/01/30 10:05:55 failed trying to get namespace (capz-conf-thzjal):namespaces "capz-conf-thzjal" not found INFO: Creating namespace capz-conf-thzjal INFO: Creating event watcher for namespace "capz-conf-thzjal" conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:102 @ 01/30/23 10:05:55.664 conformance-tests Name | N | Min | Median | Mean | StdDev | Max INFO: Creating the workload cluster with name "capz-conf-thzjal" using the "conformance-ci-artifacts" template (Kubernetes v1.24.11-rc.0.11+73da4d3652771d, 1 control-plane machines, 2 worker machines) ... skipping 91 lines ... Jan 30 10:12:18.717: INFO: creating 1 resource(s) Jan 30 10:12:18.987: INFO: creating 18 resource(s) Jan 30 10:12:19.516: INFO: Install complete STEP: Waiting for Ready csi-azuredisk-controller deployment pods @ 01/30/23 10:12:19.547 STEP: waiting for deployment kube-system/csi-azuredisk-controller to be available @ 01/30/23 10:12:19.807 Jan 30 10:12:19.807: INFO: starting to wait for deployment to become available [FAILED] in [It] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:122 @ 01/30/23 10:27:19.876 Jan 30 10:27:19.877: INFO: FAILED! Jan 30 10:27:19.877: INFO: Cleaning up after "Conformance Tests conformance-tests" spec STEP: Dumping logs from the "capz-conf-thzjal" workload cluster @ 01/30/23 10:27:19.877 Jan 30 10:27:19.877: INFO: Dumping workload cluster capz-conf-thzjal/capz-conf-thzjal logs Jan 30 10:27:19.925: INFO: Collecting logs for Linux node capz-conf-thzjal-control-plane-sdcsl in cluster capz-conf-thzjal in namespace capz-conf-thzjal Jan 30 10:27:32.244: INFO: Collecting boot logs for AzureMachine capz-conf-thzjal-control-plane-sdcsl ... skipping 40 lines ...
Jan 30 10:27:57.615: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-7996479b64-hsfmv, container csi-resizer Jan 30 10:27:57.615: INFO: Describing Pod kube-system/csi-azuredisk-controller-7996479b64-hsfmv Jan 30 10:27:57.615: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-7996479b64-hsfmv, container csi-attacher Jan 30 10:27:57.615: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-7996479b64-hsfmv, container azuredisk Jan 30 10:27:57.615: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-7996479b64-hsfmv, container liveness-probe Jan 30 10:27:57.615: INFO: Creating log watcher for controller kube-system/csi-azuredisk-controller-7996479b64-hsfmv, container csi-snapshotter Jan 30 10:27:57.699: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-controller-7996479b64-hsfmv, container azuredisk: container "azuredisk" in pod "csi-azuredisk-controller-7996479b64-hsfmv" is waiting to start: trying and failing to pull image Jan 30 10:27:58.015: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-56n7d, container liveness-probe Jan 30 10:27:58.015: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-56n7d, container node-driver-registrar Jan 30 10:27:58.015: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-56n7d, container azuredisk Jan 30 10:27:58.015: INFO: Describing Pod kube-system/csi-azuredisk-node-56n7d Jan 30 10:27:58.091: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-56n7d, container azuredisk: container "azuredisk" in pod "csi-azuredisk-node-56n7d" is waiting to start: trying and failing to pull image Jan 30 10:27:58.409: INFO: Describing Pod kube-system/csi-azuredisk-node-lmbxx Jan 30 10:27:58.409: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-lmbxx, container liveness-probe Jan 30 10:27:58.409: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-lmbxx, container node-driver-registrar Jan 30 10:27:58.409: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-lmbxx, container azuredisk Jan 30 10:27:58.483: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-lmbxx, container azuredisk: container "azuredisk" in pod "csi-azuredisk-node-lmbxx" is waiting to start: trying and failing to pull image Jan 30 10:27:58.809: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-sjk2k, container node-driver-registrar Jan 30 10:27:58.809: INFO: Describing Pod kube-system/csi-azuredisk-node-sjk2k Jan 30 10:27:58.810: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-sjk2k, container azuredisk Jan 30 10:27:58.810: INFO: Creating log watcher for controller kube-system/csi-azuredisk-node-sjk2k, container liveness-probe Jan 30 10:27:58.882: INFO: Error starting logs stream for pod kube-system/csi-azuredisk-node-sjk2k, container azuredisk: container "azuredisk" in pod "csi-azuredisk-node-sjk2k" is waiting to start: trying and failing to pull image Jan 30 10:27:59.208: INFO: Describing Pod kube-system/etcd-capz-conf-thzjal-control-plane-sdcsl Jan 30 10:27:59.208: INFO: Creating log watcher for controller kube-system/etcd-capz-conf-thzjal-control-plane-sdcsl, container etcd Jan 30 10:27:59.606: INFO: Creating log watcher for controller kube-system/kube-apiserver-capz-conf-thzjal-control-plane-sdcsl, container kube-apiserver Jan 30 10:27:59.606: INFO: 
Describing Pod kube-system/kube-apiserver-capz-conf-thzjal-control-plane-sdcsl Jan 30 10:28:00.007: INFO: Describing Pod kube-system/kube-controller-manager-capz-conf-thzjal-control-plane-sdcsl Jan 30 10:28:00.007: INFO: Creating log watcher for controller kube-system/kube-controller-manager-capz-conf-thzjal-control-plane-sdcsl, container kube-controller-manager ... skipping 18 lines ... INFO: Waiting for the Cluster capz-conf-thzjal/capz-conf-thzjal to be deleted STEP: Waiting for cluster capz-conf-thzjal to be deleted @ 01/30/23 10:28:07.444 Jan 30 10:33:47.647: INFO: Deleting namespace used for hosting the "conformance-tests" test spec INFO: Deleting namespace capz-conf-thzjal Jan 30 10:33:47.669: INFO: Checking if any resources are left over in Azure for spec "conformance-tests" STEP: Redacting sensitive information from logs @ 01/30/23 10:33:48.48 • [FAILED] [1687.875 seconds] Conformance Tests [It] conformance-tests /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100 [FAILED] Timed out after 900.001s. Deployment kube-system/csi-azuredisk-controller failed Deployment: { "metadata": { "name": "csi-azuredisk-controller", "namespace": "kube-system", "uid": "aa520115-fce5-4b8a-83d0-97d61c80deeb", ... skipping 554 lines ... "image": "mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.6.0", "args": [ "-csi-address=$(ADDRESS)", "-v=2", "-leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=240s" ], "env": [ { "name": "ADDRESS", ... skipping 228 lines ... [ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report autogenerated by Ginkgo [ReportAfterSuite] PASSED [0.006 seconds] ------------------------------ Summarizing 1 Failure: [FAIL] Conformance Tests [It] conformance-tests /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:122 Ran 1 of 23 Specs in 1827.837 seconds FAIL! -- 0 Passed | 1 Failed | 0 Pending | 22 Skipped --- FAIL: TestE2E (1827.85s) FAIL You're using deprecated Ginkgo functionality: ============================================= CurrentGinkgoTestDescription() is deprecated in Ginkgo V2. Use CurrentSpecReport() instead. Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:278 /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:281 To silence deprecations that can be silenced set the following environment variable: ACK_GINKGO_DEPRECATIONS=2.6.0 Ginkgo ran 1 suite in 33m2.744550565s Test Suite Failed make[2]: *** [Makefile:655: test-e2e-run] Error 1 make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure' make[1]: *** [Makefile:670: test-e2e-skip-push] Error 2 make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure' make: *** [Makefile:686: test-conformance] Error 2 ================ REDACTING LOGS ================ All sensitive variables are redacted + EXIT_VALUE=2 + set +o xtrace Cleaning up after docker in docker. 
================================================================================ ... skipping 5 lines ...
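The events table in the failure message only records the Normal ScalingReplicaSet event; the Warning events behind the ProgressDeadlineExceeded condition (the image-pull failures seen in the AfterEach dump) would carry the useful detail. A sketch, assuming kubeconfig access to the workload cluster, that lists Warning events in kube-system with client-go:

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path for the workload cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/workload-kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// List only Warning events in kube-system; image-pull Failed/BackOff
	// events for the csi-azuredisk pods should show up here.
	events, err := cs.CoreV1().Events("kube-system").List(context.Background(),
		metav1.ListOptions{FieldSelector: "type=" + corev1.EventTypeWarning})
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s %s %s/%s: %s\n",
			e.LastTimestamp.Time.Format("15:04:05"), e.Reason,
			e.InvolvedObject.Kind, e.InvolvedObject.Name, e.Message)
	}
}
```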