Result: FAILURE
Tests: 1 failed / 15 succeeded
Started: 2022-09-29 06:59
Elapsed: 23m4s
Revision: main

Test Failures


capi-e2e [It] When testing ClusterClass changes [ClusterClass] Should successfully rollout the managed topology upon changes to the ClusterClass 2m40s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\s\[It\]\sWhen\stesting\sClusterClass\schanges\s\[ClusterClass\]\sShould\ssuccessfully\srollout\sthe\smanaged\stopology\supon\schanges\sto\sthe\sClusterClass$'
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterclass_changes.go:423
sigs.k8s.io/cluster-api/test/e2e.rebaseClusterClassAndWait({0x26000c8?, 0xc000510200}, {{0x260e548, 0xc000223ec0}, 0xc000a11800, 0xc000414d00, {0xc000651540, 0x2, 0x2}})
	/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterclass_changes.go:423 +0x62d
sigs.k8s.io/cluster-api/test/e2e.ClusterClassChangesSpec.func2()
	/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterclass_changes.go:174 +0x8c8
				
stdout/stderr from junit.e2e_suite.1.xml



15 Passed Tests

11 Skipped Tests

Error lines from build-log.txt

... skipping 832 lines ...
Status: Downloaded newer image for quay.io/jetstack/cert-manager-controller:v1.9.1
quay.io/jetstack/cert-manager-controller:v1.9.1
+ export GINKGO_NODES=3
+ GINKGO_NODES=3
+ export GINKGO_NOCOLOR=true
+ GINKGO_NOCOLOR=true
+ export GINKGO_ARGS=--fail-fast
+ GINKGO_ARGS=--fail-fast
+ export E2E_CONF_FILE=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml
+ E2E_CONF_FILE=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml
+ export ARTIFACTS=/logs/artifacts
+ ARTIFACTS=/logs/artifacts
+ export SKIP_RESOURCE_CLEANUP=false
+ SKIP_RESOURCE_CLEANUP=false
... skipping 78 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-kcp-scale-in --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-kcp-scale-in.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ipv6 --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ipv6.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-topology --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-topology.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ignition --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ignition.yaml
mkdir -p /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/test-extension
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/extension/config/default > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/test-extension/deployment.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/ginkgo-v2.1.4 -v --trace --tags=e2e --focus="" -skip="\[Conformance\]" -skip="\[K8s-Upgrade\]|\[IPv6\]" --nodes=3 --timeout=2h --no-color=true --output-dir="/logs/artifacts" --junit-report="junit.e2e_suite.1.xml" --fail-fast . -- \
    -e2e.artifacts-folder="/logs/artifacts" \
    -e2e.config="/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml" \
    -e2e.skip-resource-cleanup=false -e2e.use-existing-cluster=false
go: downloading k8s.io/apimachinery v0.25.0
go: downloading github.com/blang/semver v3.5.1+incompatible
go: downloading github.com/onsi/gomega v1.20.0
... skipping 206 lines ...
    machinedeployment.cluster.x-k8s.io/quick-start-wkmiey-md-0 created
    kubeadmcontrolplane.controlplane.cluster.x-k8s.io/quick-start-wkmiey-control-plane created
    dockercluster.infrastructure.cluster.x-k8s.io/quick-start-wkmiey created
    dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-wkmiey-control-plane created
    dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-wkmiey-md-0 created

    Failed to get logs for Machine quick-start-wkmiey-control-plane-shhxc, Cluster quick-start-wi2vyp/quick-start-wkmiey: exit status 2
    Failed to get logs for Machine quick-start-wkmiey-md-0-85f4c64676-q7rng, Cluster quick-start-wi2vyp/quick-start-wkmiey: exit status 2
  << End Captured StdOut/StdErr Output

  Begin Captured GinkgoWriter Output >>
    STEP: Creating a namespace for hosting the "quick-start" test spec 09/29/22 07:09:12.589
    INFO: Creating namespace quick-start-wi2vyp
    INFO: Creating event watcher for namespace "quick-start-wi2vyp"
... skipping 45 lines ...
    machinehealthcheck.cluster.x-k8s.io/mhc-remediation-gqn7hl-mhc-0 created
    kubeadmcontrolplane.controlplane.cluster.x-k8s.io/mhc-remediation-gqn7hl-control-plane created
    dockercluster.infrastructure.cluster.x-k8s.io/mhc-remediation-gqn7hl created
    dockermachinetemplate.infrastructure.cluster.x-k8s.io/mhc-remediation-gqn7hl-control-plane created
    dockermachinetemplate.infrastructure.cluster.x-k8s.io/mhc-remediation-gqn7hl-md-0 created

    Failed to get logs for Machine mhc-remediation-gqn7hl-control-plane-446hg, Cluster mhc-remediation-2yj3fc/mhc-remediation-gqn7hl: exit status 2
    Failed to get logs for Machine mhc-remediation-gqn7hl-md-0-7cf5988858-g6d4h, Cluster mhc-remediation-2yj3fc/mhc-remediation-gqn7hl: exit status 2
  << End Captured StdOut/StdErr Output

  Begin Captured GinkgoWriter Output >>
    STEP: Creating a namespace for hosting the "mhc-remediation" test spec 09/29/22 07:09:12.61
    INFO: Creating namespace mhc-remediation-2yj3fc
    INFO: Creating event watcher for namespace "mhc-remediation-2yj3fc"
... skipping 51 lines ...
    machinedeployment.cluster.x-k8s.io/md-rollout-91ra82-md-0 created
    kubeadmcontrolplane.controlplane.cluster.x-k8s.io/md-rollout-91ra82-control-plane created
    dockercluster.infrastructure.cluster.x-k8s.io/md-rollout-91ra82 created
    dockermachinetemplate.infrastructure.cluster.x-k8s.io/md-rollout-91ra82-control-plane created
    dockermachinetemplate.infrastructure.cluster.x-k8s.io/md-rollout-91ra82-md-0 created

    Failed to get logs for Machine md-rollout-91ra82-control-plane-lcmqg, Cluster md-rollout-ezm14w/md-rollout-91ra82: exit status 2
    Failed to get logs for Machine md-rollout-91ra82-md-0-8575567996-ps4fc, Cluster md-rollout-ezm14w/md-rollout-91ra82: exit status 2
  << End Captured StdOut/StdErr Output

  Begin Captured GinkgoWriter Output >>
    STEP: Creating a namespace for hosting the "md-rollout" test spec 09/29/22 07:09:12.579
    INFO: Creating namespace md-rollout-ezm14w
    INFO: Creating event watcher for namespace "md-rollout-ezm14w"
... skipping 53 lines ...

    cluster.cluster.x-k8s.io/kcp-adoption-ht0ozn configured
    kubeadmcontrolplane.controlplane.cluster.x-k8s.io/kcp-adoption-ht0ozn-control-plane created
    dockercluster.infrastructure.cluster.x-k8s.io/kcp-adoption-ht0ozn configured
    dockermachinetemplate.infrastructure.cluster.x-k8s.io/kcp-adoption-ht0ozn-control-plane created

    Failed to get logs for Machine kcp-adoption-ht0ozn-control-plane-0, Cluster kcp-adoption-tcn3h2/kcp-adoption-ht0ozn: exit status 2
  << End Captured StdOut/StdErr Output

  Begin Captured GinkgoWriter Output >>
    STEP: Creating a namespace for hosting the "kcp-adoption" test spec 09/29/22 07:11:58.387
    INFO: Creating namespace kcp-adoption-tcn3h2
    INFO: Creating event watcher for namespace "kcp-adoption-tcn3h2"
... skipping 36 lines ...
    machinedeployment.cluster.x-k8s.io/md-scale-sa120p-md-0 created
    kubeadmcontrolplane.controlplane.cluster.x-k8s.io/md-scale-sa120p-control-plane created
    dockercluster.infrastructure.cluster.x-k8s.io/md-scale-sa120p created
    dockermachinetemplate.infrastructure.cluster.x-k8s.io/md-scale-sa120p-control-plane created
    dockermachinetemplate.infrastructure.cluster.x-k8s.io/md-scale-sa120p-md-0 created

    Failed to get logs for Machine md-scale-sa120p-control-plane-g6744, Cluster md-scale-4pmv6d/md-scale-sa120p: exit status 2
    Failed to get logs for Machine md-scale-sa120p-md-0-7fdbcbf895-9b9wk, Cluster md-scale-4pmv6d/md-scale-sa120p: exit status 2
  << End Captured StdOut/StdErr Output

  Begin Captured GinkgoWriter Output >>
    STEP: Creating a namespace for hosting the "md-scale" test spec 09/29/22 07:12:28.396
    INFO: Creating namespace md-scale-4pmv6d
    INFO: Creating event watcher for namespace "md-scale-4pmv6d"
... skipping 51 lines ...
    machinehealthcheck.cluster.x-k8s.io/mhc-remediation-qjt6gs-mhc-0 created
    kubeadmcontrolplane.controlplane.cluster.x-k8s.io/mhc-remediation-qjt6gs-control-plane created
    dockercluster.infrastructure.cluster.x-k8s.io/mhc-remediation-qjt6gs created
    dockermachinetemplate.infrastructure.cluster.x-k8s.io/mhc-remediation-qjt6gs-control-plane created
    dockermachinetemplate.infrastructure.cluster.x-k8s.io/mhc-remediation-qjt6gs-md-0 created

    Failed to get logs for Machine mhc-remediation-qjt6gs-control-plane-62qb7, Cluster mhc-remediation-su16sl/mhc-remediation-qjt6gs: exit status 2
    Failed to get logs for Machine mhc-remediation-qjt6gs-control-plane-fs285, Cluster mhc-remediation-su16sl/mhc-remediation-qjt6gs: exit status 2
    Failed to get logs for Machine mhc-remediation-qjt6gs-control-plane-fwwck, Cluster mhc-remediation-su16sl/mhc-remediation-qjt6gs: exit status 2
    Failed to get logs for Machine mhc-remediation-qjt6gs-md-0-5b8c5f7969-lvhb7, Cluster mhc-remediation-su16sl/mhc-remediation-qjt6gs: exit status 2
  << End Captured StdOut/StdErr Output

  Begin Captured GinkgoWriter Output >>
    STEP: Creating a namespace for hosting the "mhc-remediation" test spec 09/29/22 07:11:09.33
    INFO: Creating namespace mhc-remediation-su16sl
    INFO: Creating event watcher for namespace "mhc-remediation-su16sl"
... skipping 55 lines ...
    dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinetemplate created
    kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/quick-start-default-worker-bootstraptemplate created
    configmap/cni-quick-start-ga3lj1-crs-0 created
    clusterresourceset.addons.cluster.x-k8s.io/quick-start-ga3lj1-crs-0 created
    cluster.cluster.x-k8s.io/quick-start-ga3lj1 created

    Failed to get logs for Machine quick-start-ga3lj1-gz78p-d8v5z, Cluster quick-start-vaocdt/quick-start-ga3lj1: exit status 2
    Failed to get logs for Machine quick-start-ga3lj1-md-0-jj6qf-6f899797f6-x5qhw, Cluster quick-start-vaocdt/quick-start-ga3lj1: exit status 2
  << End Captured StdOut/StdErr Output

  Begin Captured GinkgoWriter Output >>
    STEP: Creating a namespace for hosting the "quick-start" test spec 09/29/22 07:17:10.501
    INFO: Creating namespace quick-start-vaocdt
    INFO: Creating event watcher for namespace "quick-start-vaocdt"
... skipping 27 lines ...
  << End Captured GinkgoWriter Output
------------------------------
When testing ClusterClass changes [ClusterClass]
  Should successfully rollout the managed topology upon changes to the ClusterClass
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterclass_changes.go:132
------------------------------
• [FAILED] [160.935 seconds]
When testing ClusterClass changes [ClusterClass]
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterclass_changes_test.go:26
  [It] Should successfully rollout the managed topology upon changes to the ClusterClass
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterclass_changes.go:132

  Begin Captured StdOut/StdErr Output >>
... skipping 4 lines ...
    dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinetemplate created
    kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/quick-start-default-worker-bootstraptemplate created
    configmap/cni-clusterclass-changes-3yt6dr-crs-0 created
    clusterresourceset.addons.cluster.x-k8s.io/clusterclass-changes-3yt6dr-crs-0 created
    cluster.cluster.x-k8s.io/clusterclass-changes-3yt6dr created

    Failed to get logs for Machine clusterclass-changes-3yt6dr-5zk6j-rnnn6, Cluster clusterclass-changes-5meivf/clusterclass-changes-3yt6dr: exit status 2
    Failed to get logs for Machine clusterclass-changes-3yt6dr-md-0-v227j-765479bfd-6l8xh, Cluster clusterclass-changes-5meivf/clusterclass-changes-3yt6dr: exit status 2
    Failed to get logs for Machine clusterclass-changes-3yt6dr-md-0-v227j-7dd57b8cd4-7hkjc, Cluster clusterclass-changes-5meivf/clusterclass-changes-3yt6dr: exited with status: 2, &{%!s(*os.file=&{{{0 0 0} 16 {0} <nil> 0 1 true true true} /tmp/clusterclass-changes-3yt6dr-md-0-v227j-7dd57b8cd4-7hkjc3090197654 <nil> false false false})}
  << End Captured StdOut/StdErr Output

  Begin Captured GinkgoWriter Output >>
    STEP: Creating a namespace for hosting the "clusterclass-changes" test spec 09/29/22 07:18:54.87
    INFO: Creating namespace clusterclass-changes-5meivf
    INFO: Creating event watcher for namespace "clusterclass-changes-5meivf"
... skipping 30 lines ...
    INFO: Waiting for the Cluster clusterclass-changes-5meivf/clusterclass-changes-3yt6dr to be deleted
    STEP: Waiting for cluster clusterclass-changes-3yt6dr to be deleted 09/29/22 07:21:05.65
    STEP: Deleting namespace used for hosting the "clusterclass-changes" test spec 09/29/22 07:21:35.733
    INFO: Deleting namespace clusterclass-changes-5meivf
  << End Captured GinkgoWriter Output

  Expected success, but got an error:
      <errors.aggregate | len:1, cap:1>: [
          <*errors.StatusError | 0xc00063eb40>{
              ErrStatus: {
                  TypeMeta: {Kind: "", APIVersion: ""},
                  ListMeta: {
                      SelfLink: "",
                      ResourceVersion: "",
                      Continue: "",
                      RemainingItemCount: nil,
                  },
                  Status: "Failure",
                  Message: "admission webhook \"default.cluster.cluster.x-k8s.io\" denied the request: Internal error occurred: Cluster clusterclass-changes-3yt6dr can't be validated. ClusterClass quick-start-bimolu can not be retrieved: ClusterClass.cluster.x-k8s.io \"quick-start-bimolu\" not found",
                  Reason: "InternalError",
                  Details: {
                      Name: "",
                      Group: "",
                      Kind: "",
                      UID: "",
... skipping 7 lines ...
                      RetryAfterSeconds: 0,
                  },
                  Code: 500,
              },
          },
      ]
      admission webhook "default.cluster.cluster.x-k8s.io" denied the request: Internal error occurred: Cluster clusterclass-changes-3yt6dr can't be validated. ClusterClass quick-start-bimolu can not be retrieved: ClusterClass.cluster.x-k8s.io "quick-start-bimolu" not found
  In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterclass_changes.go:423

  Full Stack Trace
    sigs.k8s.io/cluster-api/test/e2e.rebaseClusterClassAndWait({0x26000c8?, 0xc000510200}, {{0x260e548, 0xc000223ec0}, 0xc000a11800, 0xc000414d00, {0xc000651540, 0x2, 0x2}})
    	/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterclass_changes.go:423 +0x62d
    sigs.k8s.io/cluster-api/test/e2e.ClusterClassChangesSpec.func2()
... skipping 26 lines ...
    machinedeployment.cluster.x-k8s.io/node-drain-0nl4cf-md-0 created
    kubeadmcontrolplane.controlplane.cluster.x-k8s.io/node-drain-0nl4cf-control-plane created
    dockercluster.infrastructure.cluster.x-k8s.io/node-drain-0nl4cf created
    dockermachinetemplate.infrastructure.cluster.x-k8s.io/node-drain-0nl4cf-control-plane created
    dockermachinetemplate.infrastructure.cluster.x-k8s.io/node-drain-0nl4cf-md-0 created

    Failed to get logs for Machine node-drain-0nl4cf-control-plane-4svfg, Cluster node-drain-hziem1/node-drain-0nl4cf: exit status 2
    Failed to get logs for Machine node-drain-0nl4cf-control-plane-mmrqp, Cluster node-drain-hziem1/node-drain-0nl4cf: exit status 2
    Failed to get logs for Machine node-drain-0nl4cf-control-plane-q9646, Cluster node-drain-hziem1/node-drain-0nl4cf: exit status 2
  << End Captured StdOut/StdErr Output

  Begin Captured GinkgoWriter Output >>
    STEP: Creating a namespace for hosting the "node-drain" test spec 09/29/22 07:15:02.647
    INFO: Creating namespace node-drain-hziem1
    INFO: Creating event watcher for namespace "node-drain-hziem1"
... skipping 187 lines ...
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
------------------------------


Summarizing 4 Failures:
  [FAIL] When testing ClusterClass changes [ClusterClass] [It] Should successfully rollout the managed topology upon changes to the ClusterClass
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterclass_changes.go:423
  [INTERRUPTED] When testing clusterctl upgrades [clusterctl-Upgrade] [It] Should create a management cluster and then upgrade all the providers
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterctl_upgrade.go:152
  [INTERRUPTED] When testing node drain timeout [It] A node should be forcefully removed if it cannot be drained in time
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/node_drain_timeout.go:83
  [INTERRUPTED] [SynchronizedAfterSuite] 
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/e2e_suite_test.go:169

Ran 10 of 21 Specs in 933.070 seconds
FAIL! - Interrupted by Other Ginkgo Process -- 7 Passed | 3 Failed | 0 Pending | 11 Skipped


Ginkgo ran 1 suite in 17m21.91800139s

Test Suite Failed
make: *** [Makefile:129: run] Error 1
make: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e'
+ cleanup
++ pgrep -f 'docker events'
+ kill 26214
++ pgrep -f 'ctr -n moby events'
+ kill 26215
... skipping 27 lines ...