Result   | FAILURE
Tests    | 1 failed / 65 succeeded
Started  |
Elapsed  | 1h10m
Revision | main
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capa\-e2e\s\[It\]\s\[unmanaged\]\s\[functional\]\s\[ClusterClass\]\sMultitenancy\stest\s\[ClusterClass\]\sshould\screate\scluster\swith\snested\sassumed\srole$'
[FAILED] Timed out after 10.000s.
Failed to apply the cluster template
Expected success, but got an error:
    <*errors.withStack | 0xc0027235d8>: {
        error: <*exec.ExitError | 0xc00067bea0>{
            ProcessState: {
                pid: 32151,
                status: 256,
                rusage: {
                    Utime: {Sec: 0, Usec: 676142},
                    Stime: {Sec: 0, Usec: 276044},
                    Maxrss: 112812, Ixrss: 0, Idrss: 0, Isrss: 0,
                    Minflt: 13778, Majflt: 0, Nswap: 0,
                    Inblock: 0, Oublock: 25136,
                    Msgsnd: 0, Msgrcv: 0, Nsignals: 0,
                    Nvcsw: 3139, Nivcsw: 1105,
                },
            },
            Stderr: nil,
        },
        stack: [0x1be5460, 0x1be59d1, 0x1d5a40c, 0x2191c93, 0x4db565, 0x4daa5c, 0xa35e9a, 0xa36b0e, 0xa344cd, 0x219116c, 0x2265878, 0xa11f5b, 0xa26058, 0x4704e1],
    }
    exit status 1
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:308 @ 01/25/23 04:46:38.826
(from junit.e2e_suite.xml)
Error from server (AlreadyExists): error when creating "STDIN": awsclusterroleidentities.infrastructure.cluster.x-k8s.io "capamultitenancyjump" already exists
Error from server (AlreadyExists): error when creating "STDIN": awsclusterroleidentities.infrastructure.cluster.x-k8s.io "capamultitenancynested" already exists

> Enter [BeforeEach] [unmanaged] [functional] [ClusterClass] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_clusterclass_test.go:47 @ 01/25/23 04:46:13.047
< Exit [BeforeEach] [unmanaged] [functional] [ClusterClass] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_clusterclass_test.go:47 @ 01/25/23 04:46:13.047 (0s)
> Enter [It] should create cluster with nested assumed role - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_clusterclass_test.go:53 @ 01/25/23 04:46:13.047
STEP: Node 1 acquiring resources: {ec2-normal:2, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/resource.go:187 @ 01/25/23 04:46:13.048
STEP: Node 1 acquired resources: {ec2-normal:2, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/resource.go:216 @ 01/25/23 04:46:14.049
STEP: Creating a namespace for hosting the "functional-multitenancy-nested-clusterclass" test spec - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/common.go:52 @ 01/25/23 04:46:14.049
INFO: Creating namespace functional-multitenancy-nested-clusterclass-1wx8jb
INFO: Creating event watcher for namespace "functional-multitenancy-nested-clusterclass-1wx8jb"
Jan 25 04:46:14.282: INFO: Setting environment variable: key=MULTI_TENANCY_SIMPLE_ROLE_ARN, value=arn:aws:iam::325385070673:role/CAPAMultiTenancySimple
Jan 25 04:46:14.282: INFO: Setting environment variable: key=MULTI_TENANCY_SIMPLE_ROLE_NAME, value=CAPAMultiTenancySimple
Jan 25 04:46:14.282: INFO: Setting environment variable: key=MULTI_TENANCY_SIMPLE_IDENTITY_NAME, value=capamultitenancysimple
Jan 25 04:46:14.337: INFO: Setting environment variable: key=MULTI_TENANCY_JUMP_ROLE_ARN, value=arn:aws:iam::325385070673:role/CAPAMultiTenancyJump
Jan 25 04:46:14.337: INFO: Setting environment variable: key=MULTI_TENANCY_JUMP_ROLE_NAME, value=CAPAMultiTenancyJump
Jan 25 04:46:14.337: INFO: Setting environment variable: key=MULTI_TENANCY_JUMP_IDENTITY_NAME, value=capamultitenancyjump
Jan 25 04:46:14.395: INFO: Setting environment variable: key=MULTI_TENANCY_NESTED_ROLE_ARN, value=arn:aws:iam::325385070673:role/CAPAMultiTenancyNested
Jan 25 04:46:14.395: INFO: Setting environment variable: key=MULTI_TENANCY_NESTED_ROLE_NAME, value=CAPAMultiTenancyNested
Jan 25 04:46:14.395: INFO: Setting environment variable: key=MULTI_TENANCY_NESTED_IDENTITY_NAME, value=capamultitenancynested
STEP: Creating cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_clusterclass_test.go:64 @ 01/25/23 04:46:14.395
INFO: Creating the workload cluster with name "cluster-70mztl" using the "nested-multitenancy-clusterclass" template (Kubernetes v1.25.3, 1 control-plane machines, 0 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster cluster-70mztl --infrastructure (default) --kubernetes-version v1.25.3 --control-plane-machine-count 1 --worker-machine-count 0 --flavor nested-multitenancy-clusterclass
INFO: Applying the cluster template yaml to the cluster
STEP: Dumping all the Cluster API resources in the "functional-multitenancy-nested-clusterclass-1wx8jb" namespace - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/common.go:68 @ 01/25/23 04:46:26.535
STEP: Dumping all EC2 instances in the "functional-multitenancy-nested-clusterclass-1wx8jb" namespace - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/common.go:72 @ 01/25/23 04:46:27.589
STEP: Deleting all clusters in the "functional-multitenancy-nested-clusterclass-1wx8jb" namespace with intervals ["20m" "10s"] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/common.go:76 @ 01/25/23 04:46:27.741
STEP: Deleting cluster cluster-70mztl - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/25/23 04:46:27.768
INFO: Waiting for the Cluster functional-multitenancy-nested-clusterclass-1wx8jb/cluster-70mztl to be deleted
STEP: Waiting for cluster cluster-70mztl to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/25/23 04:46:27.786
STEP: Deleting namespace used for hosting the "" test spec - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/common.go:82 @ 01/25/23 04:46:37.798
INFO: Deleting namespace functional-multitenancy-nested-clusterclass-1wx8jb
STEP: Node 1 released resources: {ec2-normal:2, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/resource.go:269 @ 01/25/23 04:46:38.825
[FAILED] Timed out after 10.000s.
Failed to apply the cluster template
Expected success, but got an error: exit status 1
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:308 @ 01/25/23 04:46:38.826
< Exit [It] should create cluster with nested assumed role - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_clusterclass_test.go:53 @ 01/25/23 04:46:38.826 (25.778s)
capa-e2e [It] [unmanaged] [Cluster API Framework] Cluster Upgrade Spec - HA Control Plane Cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capa-e2e [It] [unmanaged] [Cluster API Framework] Cluster Upgrade Spec - HA control plane with scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capa-e2e [It] [unmanaged] [Cluster API Framework] Cluster Upgrade Spec - Single control plane with workers [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capa-e2e [It] [unmanaged] [Cluster API Framework] Clusterctl Upgrade Spec [from latest v1beta1 release to v1beta2] Should create a management cluster and then upgrade all the providers
capa-e2e [It] [unmanaged] [Cluster API Framework] Machine Pool Spec Should successfully create a cluster with machine pool machines
capa-e2e [It] [unmanaged] [Cluster API Framework] Machine Remediation Spec Should successfully trigger KCP remediation
capa-e2e [It] [unmanaged] [Cluster API Framework] Machine Remediation Spec Should successfully trigger machine deployment remediation
capa-e2e [It] [unmanaged] [Cluster API Framework] Self Hosted Spec Should pivot the bootstrap cluster to a self-hosted cluster
capa-e2e [It] [unmanaged] [Cluster API Framework] [ClusterClass] Cluster Upgrade Spec - HA control plane with workers [K8s-Upgrade] [ClusterClass] Should create and upgrade a workload cluster and eventually run kubetest
capa-e2e [It] [unmanaged] [Cluster API Framework] [ClusterClass] ClusterClass Changes Spec - SSA immutability checks [ClusterClass] Should successfully rollout the managed topology upon changes to the ClusterClass
capa-e2e [It] [unmanaged] [Cluster API Framework] [ClusterClass] Self Hosted Spec [ClusterClass] Should pivot the bootstrap cluster to a self-hosted cluster
capa-e2e [It] [unmanaged] [Cluster API Framework] [smoke] [PR-Blocking] Running the quick-start spec Should create a workload cluster
capa-e2e [It] [unmanaged] [Cluster API Framework] [smoke] [PR-Blocking] Running the quick-start spec with ClusterClass Should create a workload cluster
capa-e2e [It] [unmanaged] [functional] CSI=external CCM=external AWSCSIMigration=on: upgrade to v1.23 should create volumes dynamically with external cloud provider
capa-e2e [It] [unmanaged] [functional] CSI=external CCM=in-tree AWSCSIMigration=on: upgrade to v1.23 should create volumes dynamically with external cloud provider
capa-e2e [It] [unmanaged] [functional] CSI=in-tree CCM=in-tree AWSCSIMigration=off: upgrade to v1.23 should create volumes dynamically with external cloud provider
capa-e2e [It] [unmanaged] [functional] GPU-enabled cluster test should create cluster with single worker
capa-e2e [It] [unmanaged] [functional] MachineDeployment misconfigurations MachineDeployment misconfigurations
capa-e2e [It] [unmanaged] [functional] Multitenancy test should create cluster with nested assumed role
capa-e2e [It] [unmanaged] [functional] Workload cluster with AWS S3 and Ignition parameter It should be creatable and deletable
capa-e2e [It] [unmanaged] [functional] Workload cluster with AWS SSM Parameter as the Secret Backend should be creatable and deletable
capa-e2e [It] [unmanaged] [functional] Workload cluster with EFS driver should pass dynamic provisioning test
capa-e2e [It] [unmanaged] [functional] Workload cluster with spot instances should be creatable and deletable
capa-e2e [It] [unmanaged] [functional] [ClusterClass] Workload cluster with AWS SSM Parameter as the Secret Backend [ClusterClass] should be creatable and deletable
capa-e2e [It] [unmanaged] [functional] [ClusterClass] Workload cluster with external infrastructure [ClusterClass] should create workload cluster in external VPC
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [It] [unmanaged] [functional] External infrastructure, external security groups, VPC peering, internal ELB and private subnet use only should create external clusters in peered VPC and with an internal ELB and only utilize a private subnet
capa-e2e [It] [unmanaged] [functional] Multiple workload clusters Defining clusters in the same namespace should create the clusters
capa-e2e [It] [unmanaged] [functional] Multiple workload clusters in different namespaces with machine failures should setup namespaces correctly for the two clusters
capa-e2e [It] [unmanaged] [functional] [Serial] Upgrade to main branch Kubernetes in same namespace should create the clusters
... skipping 881 lines ...
Jan 25 04:46:13.216: INFO: Setting environment variable: key=AWS_AVAILABILITY_ZONE_2, value=us-west-2b
Jan 25 04:46:13.216: INFO: Setting environment variable: key=AWS_REGION, value=us-west-2
Jan 25 04:46:13.216: INFO: Setting environment variable: key=AWS_SSH_KEY_NAME, value=cluster-api-provider-aws-sigs-k8s-io
Jan 25 04:46:13.216: INFO: Setting environment variable: key=AWS_B64ENCODED_CREDENTIALS, value=*******
<< Timeline
------------------------------
• [FAILED] [25.779 seconds]
[unmanaged] [functional] [ClusterClass] Multitenancy test [ClusterClass] [It] should create cluster with nested assumed role
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_clusterclass_test.go:53
Captured StdOut/StdErr Output >>
Error from server (AlreadyExists): error when creating "STDIN": awsclusterroleidentities.infrastructure.cluster.x-k8s.io "capamultitenancyjump" already exists
Error from server (AlreadyExists): error when creating "STDIN": awsclusterroleidentities.infrastructure.cluster.x-k8s.io "capamultitenancynested" already exists
<< Captured StdOut/StdErr Output
Timeline >>
STEP: Node 1 acquiring resources: {ec2-normal:2, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} @ 01/25/23 04:46:13.048
STEP: Node 1 acquired resources: {ec2-normal:2, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} @ 01/25/23 04:46:14.049
... skipping 20 lines ...
STEP: Deleting cluster cluster-70mztl @ 01/25/23 04:46:27.768
INFO: Waiting for the Cluster functional-multitenancy-nested-clusterclass-1wx8jb/cluster-70mztl to be deleted
STEP: Waiting for cluster cluster-70mztl to be deleted @ 01/25/23 04:46:27.786
STEP: Deleting namespace used for hosting the "" test spec @ 01/25/23 04:46:37.798
INFO: Deleting namespace functional-multitenancy-nested-clusterclass-1wx8jb
STEP: Node 1 released resources: {ec2-normal:2, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} @ 01/25/23 04:46:38.825
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:308 @ 01/25/23 04:46:38.826
<< Timeline
[FAILED] Timed out after 10.000s.
Failed to apply the cluster template
Expected success, but got an error:
    <*errors.withStack | 0xc0027235d8>: {
        error: <*exec.ExitError | 0xc00067bea0>{
            ProcessState: {
                pid: 32151,
                status: 256,
                rusage: {
                    Utime: {Sec: 0, Usec: 676142},
                    Stime: {Sec: 0, Usec: 276044},
... skipping 1128 lines ...
awsmachinetemplate.infrastructure.cluster.x-k8s.io/ci-default-worker-machinetemplate created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/ci-default-worker-bootstraptemplate created
cluster.cluster.x-k8s.io/self-hosted-ijo2sz created
configmap/cni-self-hosted-ijo2sz-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/self-hosted-ijo2sz-crs-0 created
W0125 05:02:46.317266   27138 reflector.go:347] pkg/mod/k8s.io/client-go@v0.25.0/tools/cache/reflector.go:169: watch of *v1.Event ended with: Internal error occurred: etcdserver: no leader
<< Captured StdOut/StdErr Output
Timeline >>
STEP: Node 10 acquiring resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} @ 01/25/23 04:46:13.236
STEP: Node 10 acquired resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} @ 01/25/23 04:46:14.238
STEP: Creating a namespace for hosting the "self-hosted" test spec @ 01/25/23 04:46:14.239
... skipping 280 lines ...
machinedeployment.cluster.x-k8s.io/self-hosted-7xbdl2-md-0 created
awsmachinetemplate.infrastructure.cluster.x-k8s.io/self-hosted-7xbdl2-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/self-hosted-7xbdl2-md-0 created
configmap/cni-self-hosted-7xbdl2-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/self-hosted-7xbdl2-crs-0 created
W0125 05:12:54.932246   27257 reflector.go:347] pkg/mod/k8s.io/client-go@v0.25.0/tools/cache/reflector.go:169: watch of *v1.Event ended with: Internal error occurred: etcdserver: no leader
<< Captured StdOut/StdErr Output
Timeline >>
STEP: Node 20 acquiring resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} @ 01/25/23 04:55:42.74
STEP: Node 20 acquired resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} @ 01/25/23 04:59:05.742
STEP: Creating a namespace for hosting the "self-hosted" test spec @ 01/25/23 04:59:05.742
... skipping 496 lines ...
[ReportAfterSuite] PASSED [0.018 seconds]
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
------------------------------

Summarizing 1 Failure:
  [FAIL] [unmanaged] [functional] [ClusterClass] Multitenancy test [ClusterClass] [It] should create cluster with nested assumed role
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:308

Ran 26 of 30 Specs in 3963.320 seconds
FAIL! -- 25 Passed | 1 Failed | 4 Pending | 0 Skipped

Ginkgo ran 1 suite in 1h7m46.017297846s
Test Suite Failed

real    67m46.090s
user    22m43.895s
sys     5m42.154s
make: *** [Makefile:406: test-e2e] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...