Result: FAILURE
Tests: 1 failed / 6 succeeded
Started: 2022-09-22 03:22
Elapsed: 38m59s
Revision: release-1.0

Test Failures


capi-e2e When testing clusterctl upgrades [clusterctl-Upgrade] Should create a management cluster and then upgrade all the providers (8m39s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\sWhen\stesting\sclusterctl\supgrades\s\[clusterctl\-Upgrade\]\sShould\screate\sa\smanagement\scluster\sand\sthen\supgrade\sall\sthe\sproviders$'
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterctl_upgrade.go:146
Expected success, but got an error:
    <errors.aggregate | len:1, cap:1>: [
        <*errors.StatusError | 0xc000eb5680>{
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {
                    SelfLink: "",
                    ResourceVersion: "",
                    Continue: "",
                    RemainingItemCount: nil,
                },
                Status: "Failure",
                Message: "Internal error occurred: failed calling webhook \"default.machinedeployment.cluster.x-k8s.io\": failed to call webhook: Post \"https://capi-webhook-service.capi-system.svc:443/mutate-cluster-x-k8s-io-v1beta1-machinedeployment?timeout=10s\": dial tcp 10.142.123.236:443: connect: connection refused",
                Reason: "InternalError",
                Details: {
                    Name: "",
                    Group: "",
                    Kind: "",
                    UID: "",
                    Causes: [
                        {
                            Type: "",
                            Message: "failed calling webhook \"default.machinedeployment.cluster.x-k8s.io\": failed to call webhook: Post \"https://capi-webhook-service.capi-system.svc:443/mutate-cluster-x-k8s-io-v1beta1-machinedeployment?timeout=10s\": dial tcp 10.142.123.236:443: connect: connection refused",
                            Field: "",
                        },
                    ],
                    RetryAfterSeconds: 0,
                },
                Code: 500,
            },
        },
    ]
    Internal error occurred: failed calling webhook "default.machinedeployment.cluster.x-k8s.io": failed to call webhook: Post "https://capi-webhook-service.capi-system.svc:443/mutate-cluster-x-k8s-io-v1beta1-machinedeployment?timeout=10s": dial tcp 10.142.123.236:443: connect: connection refused
/home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/machinedeployment_helpers.go:314
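
The "connect: connection refused" dialing capi-webhook-service.capi-system.svc means nothing was listening behind the webhook Service when the MachineDeployment mutation was submitted; in a clusterctl upgrade run this is typically the controller pod that serves the webhook still coming up after the provider upgrade. A minimal diagnostic sketch against the management cluster, assuming kubeconfig access: the capi-system namespace and Service name come from the error above, while the capi-controller-manager deployment name is the cluster-api default and may differ in other setups.

# Does the webhook Service have ready endpoints? An empty ENDPOINTS column
# would explain the connection-refused error.
kubectl -n capi-system get endpoints capi-webhook-service

# Is the controller pod that serves the webhook running and ready?
kubectl -n capi-system get pods

# Controller logs around the failed admission call (default deployment name,
# may differ).
kubectl -n capi-system logs deployment/capi-controller-manager

# Which webhook configuration routes MachineDeployment mutation to this Service?
kubectl get mutatingwebhookconfigurations | grep -i capi

To reproduce just this spec, the go run hack/e2e.go command above re-runs it via its --ginkgo.focus regex.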
				
Output recorded in junit.e2e_suite.1.xml




Error lines from build-log.txt

... skipping 840 lines ...

STEP: Waiting for the control plane to be ready
STEP: Taking stable ownership of the Machines
STEP: Taking ownership of the cluster's PKI material
STEP: PASSED!
STEP: Dumping logs from the "kcp-adoption-xfckwb" workload cluster
Failed to get logs for machine kcp-adoption-xfckwb-control-plane-0, cluster kcp-adoption-ybtgap/kcp-adoption-xfckwb: exit status 2
STEP: Dumping all the Cluster API resources in the "kcp-adoption-ybtgap" namespace
STEP: Deleting cluster kcp-adoption-ybtgap/kcp-adoption-xfckwb
STEP: Deleting cluster kcp-adoption-xfckwb
INFO: Waiting for the Cluster kcp-adoption-ybtgap/kcp-adoption-xfckwb to be deleted
STEP: Waiting for cluster kcp-adoption-xfckwb to be deleted
STEP: Deleting namespace used for hosting the "kcp-adoption" test spec
... skipping 44 lines ...
INFO: Waiting for correct number of replicas to exist
STEP: Scaling the MachineDeployment down to 1
INFO: Scaling machine deployment md-scale-c7xvve/md-scale-zjq1u5-md-0 from 3 to 1 replicas
INFO: Waiting for correct number of replicas to exist
STEP: PASSED!
STEP: Dumping logs from the "md-scale-zjq1u5" workload cluster
Failed to get logs for machine md-scale-zjq1u5-control-plane-9fpqj, cluster md-scale-c7xvve/md-scale-zjq1u5: exit status 2
Failed to get logs for machine md-scale-zjq1u5-md-0-65c84b89ff-c88cd, cluster md-scale-c7xvve/md-scale-zjq1u5: exit status 2
STEP: Dumping all the Cluster API resources in the "md-scale-c7xvve" namespace
STEP: Deleting cluster md-scale-c7xvve/md-scale-zjq1u5
STEP: Deleting cluster md-scale-zjq1u5
INFO: Waiting for the Cluster md-scale-c7xvve/md-scale-zjq1u5 to be deleted
STEP: Waiting for cluster md-scale-zjq1u5 to be deleted
STEP: Deleting namespace used for hosting the "md-scale" test spec
... skipping 54 lines ...
STEP: Waiting for deployment node-drain-t97nle-unevictable-workload/unevictable-pod-vw6 to be available
STEP: Scale down the controlplane of the workload cluster and make sure that nodes running workload can be deleted even the draining process is blocked.
INFO: Scaling controlplane node-drain-t97nle/node-drain-m7vc3z-control-plane from 0xc000abae80 to 1 replicas
INFO: Waiting for correct number of replicas to exist
STEP: PASSED!
STEP: Dumping logs from the "node-drain-m7vc3z" workload cluster
Failed to get logs for machine node-drain-m7vc3z-control-plane-w6x4n, cluster node-drain-t97nle/node-drain-m7vc3z: exit status 2
STEP: Dumping all the Cluster API resources in the "node-drain-t97nle" namespace
STEP: Deleting cluster node-drain-t97nle/node-drain-m7vc3z
STEP: Deleting cluster node-drain-m7vc3z
INFO: Waiting for the Cluster node-drain-t97nle/node-drain-m7vc3z to be deleted
STEP: Waiting for cluster node-drain-m7vc3z to be deleted
STEP: Deleting namespace used for hosting the "node-drain" test spec
... skipping 38 lines ...
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
STEP: PASSED!
STEP: Dumping logs from the "quick-start-0gidoi" workload cluster
Failed to get logs for machine quick-start-0gidoi-control-plane-nfrdt, cluster quick-start-0x7o5f/quick-start-0gidoi: exit status 2
Failed to get logs for machine quick-start-0gidoi-md-0-8554588949-t6s56, cluster quick-start-0x7o5f/quick-start-0gidoi: exit status 2
STEP: Dumping all the Cluster API resources in the "quick-start-0x7o5f" namespace
STEP: Deleting cluster quick-start-0x7o5f/quick-start-0gidoi
STEP: Deleting cluster quick-start-0gidoi
INFO: Waiting for the Cluster quick-start-0x7o5f/quick-start-0gidoi to be deleted
STEP: Waiting for cluster quick-start-0gidoi to be deleted
STEP: Deleting namespace used for hosting the "quick-start" test spec
... skipping 102 lines ...
• Failure [519.836 seconds]
When testing clusterctl upgrades [clusterctl-Upgrade]
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterctl_upgrade_test.go:25
  Should create a management cluster and then upgrade all the providers [It]
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterctl_upgrade.go:146

  Expected success, but got an error:
      <errors.aggregate | len:1, cap:1>: [
          <*errors.StatusError | 0xc000eb5680>{
              ErrStatus: {
                  TypeMeta: {Kind: "", APIVersion: ""},
                  ListMeta: {
                      SelfLink: "",
                      ResourceVersion: "",
                      Continue: "",
                      RemainingItemCount: nil,
                  },
                  Status: "Failure",
                  Message: "Internal error occurred: failed calling webhook \"default.machinedeployment.cluster.x-k8s.io\": failed to call webhook: Post \"https://capi-webhook-service.capi-system.svc:443/mutate-cluster-x-k8s-io-v1beta1-machinedeployment?timeout=10s\": dial tcp 10.142.123.236:443: connect: connection refused",
                  Reason: "InternalError",
                  Details: {
                      Name: "",
                      Group: "",
                      Kind: "",
                      UID: "",
                      Causes: [
                          {
                              Type: "",
                              Message: "failed calling webhook \"default.machinedeployment.cluster.x-k8s.io\": failed to call webhook: Post \"https://capi-webhook-service.capi-system.svc:443/mutate-cluster-x-k8s-io-v1beta1-machinedeployment?timeout=10s\": dial tcp 10.142.123.236:443: connect: connection refused",
                              Field: "",
                          },
                      ],
                      RetryAfterSeconds: 0,
                  },
                  Code: 500,
              },
          },
      ]
      Internal error occurred: failed calling webhook "default.machinedeployment.cluster.x-k8s.io": failed to call webhook: Post "https://capi-webhook-service.capi-system.svc:443/mutate-cluster-x-k8s-io-v1beta1-machinedeployment?timeout=10s": dial tcp 10.142.123.236:443: connect: connection refused

  /home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/machinedeployment_helpers.go:314

  Full Stack Trace
  sigs.k8s.io/cluster-api/test/framework.ScaleAndWaitMachineDeployment(0x22de868, 0xc000416c80, 0x22fa080, 0xc000947640, 0xc0001d2e00, 0xc001142b00, 0x2, 0xc001140500, 0x2, 0x2)
  	/home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/machinedeployment_helpers.go:314 +0x5e5
... skipping 73 lines ...
STEP: Ensuring kube-proxy has the correct image
INFO: Waiting for CoreDNS to have the upgraded image tag
STEP: Ensuring CoreDNS has the correct image
INFO: Waiting for etcd to have the upgraded image tag
STEP: PASSED!
STEP: Dumping logs from the "kcp-upgrade-19h4x0" workload cluster
Failed to get logs for machine kcp-upgrade-19h4x0-control-plane-cqg4z, cluster kcp-upgrade-9f51bg/kcp-upgrade-19h4x0: exit status 2
Failed to get logs for machine kcp-upgrade-19h4x0-control-plane-hsmcr, cluster kcp-upgrade-9f51bg/kcp-upgrade-19h4x0: exit status 2
Failed to get logs for machine kcp-upgrade-19h4x0-control-plane-smwzw, cluster kcp-upgrade-9f51bg/kcp-upgrade-19h4x0: exit status 2
Failed to get logs for machine kcp-upgrade-19h4x0-md-0-787d6f9f6c-g4l4g, cluster kcp-upgrade-9f51bg/kcp-upgrade-19h4x0: exit status 2
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-9f51bg" namespace
STEP: Deleting cluster kcp-upgrade-9f51bg/kcp-upgrade-19h4x0
STEP: Deleting cluster kcp-upgrade-19h4x0
INFO: Waiting for the Cluster kcp-upgrade-9f51bg/kcp-upgrade-19h4x0 to be deleted
STEP: Waiting for cluster kcp-upgrade-19h4x0 to be deleted
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
... skipping 49 lines ...
STEP: Ensuring kube-proxy has the correct image
INFO: Waiting for CoreDNS to have the upgraded image tag
STEP: Ensuring CoreDNS has the correct image
INFO: Waiting for etcd to have the upgraded image tag
STEP: PASSED!
STEP: Dumping logs from the "kcp-upgrade-s0u1q7" workload cluster
Failed to get logs for machine kcp-upgrade-s0u1q7-control-plane-cqj8s, cluster kcp-upgrade-tm7tbm/kcp-upgrade-s0u1q7: exit status 2
Failed to get logs for machine kcp-upgrade-s0u1q7-control-plane-km8zx, cluster kcp-upgrade-tm7tbm/kcp-upgrade-s0u1q7: exit status 2
Failed to get logs for machine kcp-upgrade-s0u1q7-control-plane-xl544, cluster kcp-upgrade-tm7tbm/kcp-upgrade-s0u1q7: exit status 2
Failed to get logs for machine kcp-upgrade-s0u1q7-md-0-64fff658bd-d7tpb, cluster kcp-upgrade-tm7tbm/kcp-upgrade-s0u1q7: exit status 2
STEP: Dumping all the Cluster API resources in the "kcp-upgrade-tm7tbm" namespace
STEP: Deleting cluster kcp-upgrade-tm7tbm/kcp-upgrade-s0u1q7
STEP: Deleting cluster kcp-upgrade-s0u1q7
INFO: Waiting for the Cluster kcp-upgrade-tm7tbm/kcp-upgrade-s0u1q7 to be deleted
STEP: Waiting for cluster kcp-upgrade-s0u1q7 to be deleted
STEP: Deleting namespace used for hosting the "kcp-upgrade" test spec
... skipping 4 lines ...
When testing KCP upgrade in a HA cluster
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/kcp_upgrade_test.go:41
  Should successfully upgrade Kubernetes, DNS, kube-proxy, and etcd
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/kcp_upgrade.go:75
------------------------------
STEP: Dumping logs from the bootstrap cluster
Failed to get logs for the bootstrap cluster node test-2bqtnj-control-plane: exit status 2
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] When testing clusterctl upgrades [clusterctl-Upgrade] [It] Should create a management cluster and then upgrade all the providers 
/home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/machinedeployment_helpers.go:314

Ran 7 of 15 Specs in 1929.402 seconds
FAIL! -- 6 Passed | 1 Failed | 0 Pending | 8 Skipped


Ginkgo ran 1 suite in 34m21.659194234s
Test Suite Failed
make: *** [Makefile:107: run] Error 1
make: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e'
+ cleanup
++ pgrep -f 'docker events'
+ kill 14165
++ pgrep -f 'ctr -n moby events'
+ kill 14166
... skipping 25 lines ...