Result   | FAILURE
Tests    | 1 failed / 65 succeeded
Started  |
Elapsed  | 1h7m
Revision | main
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capa\-e2e\s\[It\]\s\[unmanaged\]\s\[functional\]\s\[ClusterClass\]\sMultitenancy\stest\s\[ClusterClass\]\sshould\screate\scluster\swith\snested\sassumed\srole$'
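The --ginkgo.focus value above is a regular expression over the full spec name; the backslashes escape the regex metacharacters in tags like [It] and [ClusterClass]. As a hedged sketch, the same spec could likely be re-run through the repository's own test-e2e Makefile target (seen failing at the bottom of this log), assuming the Makefile honors a GINKGO_FOCUS override — an assumption this log does not confirm:

    # Re-run only the failing multitenancy ClusterClass spec.
    # GINKGO_FOCUS is an assumed Makefile variable; the regex fragment is
    # copied from the reproduce command above.
    GINKGO_FOCUS='\[ClusterClass\] Multitenancy test \[ClusterClass\] should create cluster with nested assumed role' \
      make test-e2e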
[FAILED] Timed out after 10.001s.
Failed to apply the cluster template
Expected success, but got an error:
    <*errors.withStack | 0xc000707878>: {
        error: <*exec.ExitError | 0xc0005fe300>{
            ProcessState: {
                pid: 32617,
                status: 256,
                rusage: {
                    Utime: {Sec: 0, Usec: 773819},
                    Stime: {Sec: 0, Usec: 361999},
                    Maxrss: 104032, Ixrss: 0, Idrss: 0, Isrss: 0,
                    Minflt: 13083, Majflt: 0, Nswap: 0,
                    Inblock: 0, Oublock: 25696,
                    Msgsnd: 0, Msgrcv: 0, Nsignals: 0,
                    Nvcsw: 3240, Nivcsw: 1929,
                },
            },
            Stderr: nil,
        },
        stack: [0x1be5460, 0x1be59d1, 0x1d5a40c, 0x2191c93, 0x4db565, 0x4daa5c, 0xa35e9a, 0xa36b0e, 0xa344cd, 0x219116c, 0x2265878, 0xa11f5b, 0xa26058, 0x4704e1],
    }
    exit status 1
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:308 @ 01/25/23 16:49:05.077
from junit.e2e_suite.xml
Error from server (AlreadyExists): error when creating "STDIN": awsclusterroleidentities.infrastructure.cluster.x-k8s.io "capamultitenancyjump" already exists
Error from server (AlreadyExists): error when creating "STDIN": awsclusterroleidentities.infrastructure.cluster.x-k8s.io "capamultitenancynested" already exists

> Enter [BeforeEach] [unmanaged] [functional] [ClusterClass] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_clusterclass_test.go:47 @ 01/25/23 16:48:39.554
< Exit [BeforeEach] [unmanaged] [functional] [ClusterClass] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_clusterclass_test.go:47 @ 01/25/23 16:48:39.554 (0s)
> Enter [It] should create cluster with nested assumed role - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_clusterclass_test.go:53 @ 01/25/23 16:48:39.554
STEP: Node 19 acquiring resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/resource.go:187 @ 01/25/23 16:48:39.575
STEP: Node 19 acquired resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/resource.go:216 @ 01/25/23 16:48:40.576
STEP: Creating a namespace for hosting the "functional-multitenancy-nested-clusterclass" test spec - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/common.go:52 @ 01/25/23 16:48:40.576
INFO: Creating namespace functional-multitenancy-nested-clusterclass-brffsg
INFO: Creating event watcher for namespace "functional-multitenancy-nested-clusterclass-brffsg"
Jan 25 16:48:41.129: INFO: Setting environment variable: key=MULTI_TENANCY_SIMPLE_ROLE_ARN, value=arn:aws:iam::583069479333:role/CAPAMultiTenancySimple
Jan 25 16:48:41.129: INFO: Setting environment variable: key=MULTI_TENANCY_SIMPLE_ROLE_NAME, value=CAPAMultiTenancySimple
Jan 25 16:48:41.129: INFO: Setting environment variable: key=MULTI_TENANCY_SIMPLE_IDENTITY_NAME, value=capamultitenancysimple
Jan 25 16:48:41.202: INFO: Setting environment variable: key=MULTI_TENANCY_JUMP_ROLE_ARN, value=arn:aws:iam::583069479333:role/CAPAMultiTenancyJump
Jan 25 16:48:41.202: INFO: Setting environment variable: key=MULTI_TENANCY_JUMP_ROLE_NAME, value=CAPAMultiTenancyJump
Jan 25 16:48:41.202: INFO: Setting environment variable: key=MULTI_TENANCY_JUMP_IDENTITY_NAME, value=capamultitenancyjump
Jan 25 16:48:41.263: INFO: Setting environment variable: key=MULTI_TENANCY_NESTED_ROLE_ARN, value=arn:aws:iam::583069479333:role/CAPAMultiTenancyNested
Jan 25 16:48:41.263: INFO: Setting environment variable: key=MULTI_TENANCY_NESTED_ROLE_NAME, value=CAPAMultiTenancyNested
Jan 25 16:48:41.263: INFO: Setting environment variable: key=MULTI_TENANCY_NESTED_IDENTITY_NAME, value=capamultitenancynested
STEP: Creating cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_clusterclass_test.go:64 @ 01/25/23 16:48:41.263
INFO: Creating the workload cluster with name "cluster-5d0zkd" using the "nested-multitenancy-clusterclass" template (Kubernetes v1.25.3, 1 control-plane machines, 0 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster cluster-5d0zkd --infrastructure (default) --kubernetes-version v1.25.3 --control-plane-machine-count 1 --worker-machine-count 0 --flavor nested-multitenancy-clusterclass
INFO: Applying the cluster template yaml to the cluster
STEP: Dumping all the Cluster API resources in the "functional-multitenancy-nested-clusterclass-brffsg" namespace - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/common.go:68 @ 01/25/23 16:48:52.957
STEP: Dumping all EC2 instances in the "functional-multitenancy-nested-clusterclass-brffsg" namespace - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/common.go:72 @ 01/25/23 16:48:53.802
STEP: Deleting all clusters in the "functional-multitenancy-nested-clusterclass-brffsg" namespace with intervals ["20m" "10s"] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/common.go:76 @ 01/25/23 16:48:53.966
STEP: Deleting cluster cluster-5d0zkd - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/25/23 16:48:54.004
INFO: Waiting for the Cluster functional-multitenancy-nested-clusterclass-brffsg/cluster-5d0zkd to be deleted
STEP: Waiting for cluster cluster-5d0zkd to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/25/23 16:48:54.04
STEP: Deleting namespace used for hosting the "" test spec - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/common.go:82 @ 01/25/23 16:49:04.055
INFO: Deleting namespace functional-multitenancy-nested-clusterclass-brffsg
STEP: Node 19 released resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/resource.go:269 @ 01/25/23 16:49:05.077
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:308 @ 01/25/23 16:49:05.077
< Exit [It] should create cluster with nested assumed role - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_clusterclass_test.go:53 @ 01/25/23 16:49:05.077 (25.523s)
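The captured stderr shows the immediate cause: the cluster-scoped AWSClusterRoleIdentity objects "capamultitenancyjump" and "capamultitenancynested" already existed on the management cluster when the template was applied, so the create failed with AlreadyExists and the 10s apply retry window timed out. A minimal sketch for inspecting and clearing the leftovers before a re-run; the delete step is an assumption about the desired cleanup, not something this suite performs here:

    # List the cluster-scoped identities named in the error output.
    kubectl get awsclusterroleidentities.infrastructure.cluster.x-k8s.io

    # Remove the two leftovers so the template can be re-applied.
    kubectl delete awsclusterroleidentities.infrastructure.cluster.x-k8s.io \
      capamultitenancyjump capamultitenancynested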
capa-e2e [It] [unmanaged] [Cluster API Framework] Cluster Upgrade Spec - HA Control Plane Cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capa-e2e [It] [unmanaged] [Cluster API Framework] Cluster Upgrade Spec - HA control plane with scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capa-e2e [It] [unmanaged] [Cluster API Framework] Cluster Upgrade Spec - Single control plane with workers [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capa-e2e [It] [unmanaged] [Cluster API Framework] Clusterctl Upgrade Spec [from latest v1beta1 release to v1beta2] Should create a management cluster and then upgrade all the providers
capa-e2e [It] [unmanaged] [Cluster API Framework] Machine Pool Spec Should successfully create a cluster with machine pool machines
capa-e2e [It] [unmanaged] [Cluster API Framework] Machine Remediation Spec Should successfully trigger KCP remediation
capa-e2e [It] [unmanaged] [Cluster API Framework] Machine Remediation Spec Should successfully trigger machine deployment remediation
capa-e2e [It] [unmanaged] [Cluster API Framework] Self Hosted Spec Should pivot the bootstrap cluster to a self-hosted cluster
capa-e2e [It] [unmanaged] [Cluster API Framework] [ClusterClass] Cluster Upgrade Spec - HA control plane with workers [K8s-Upgrade] [ClusterClass] Should create and upgrade a workload cluster and eventually run kubetest
capa-e2e [It] [unmanaged] [Cluster API Framework] [ClusterClass] ClusterClass Changes Spec - SSA immutability checks [ClusterClass] Should successfully rollout the managed topology upon changes to the ClusterClass
capa-e2e [It] [unmanaged] [Cluster API Framework] [ClusterClass] Self Hosted Spec [ClusterClass] Should pivot the bootstrap cluster to a self-hosted cluster
capa-e2e [It] [unmanaged] [Cluster API Framework] [smoke] [PR-Blocking] Running the quick-start spec Should create a workload cluster
capa-e2e [It] [unmanaged] [Cluster API Framework] [smoke] [PR-Blocking] Running the quick-start spec with ClusterClass Should create a workload cluster
capa-e2e [It] [unmanaged] [functional] CSI=external CCM=external AWSCSIMigration=on: upgrade to v1.23 should create volumes dynamically with external cloud provider
capa-e2e [It] [unmanaged] [functional] CSI=external CCM=in-tree AWSCSIMigration=on: upgrade to v1.23 should create volumes dynamically with external cloud provider
capa-e2e [It] [unmanaged] [functional] CSI=in-tree CCM=in-tree AWSCSIMigration=off: upgrade to v1.23 should create volumes dynamically with external cloud provider
capa-e2e [It] [unmanaged] [functional] GPU-enabled cluster test should create cluster with single worker
capa-e2e [It] [unmanaged] [functional] MachineDeployment misconfigurations MachineDeployment misconfigurations
capa-e2e [It] [unmanaged] [functional] Multitenancy test should create cluster with nested assumed role
capa-e2e [It] [unmanaged] [functional] Workload cluster with AWS S3 and Ignition parameter It should be creatable and deletable
capa-e2e [It] [unmanaged] [functional] Workload cluster with AWS SSM Parameter as the Secret Backend should be creatable and deletable
capa-e2e [It] [unmanaged] [functional] Workload cluster with EFS driver should pass dynamic provisioning test
capa-e2e [It] [unmanaged] [functional] Workload cluster with spot instances should be creatable and deletable
capa-e2e [It] [unmanaged] [functional] [ClusterClass] Workload cluster with AWS SSM Parameter as the Secret Backend [ClusterClass] should be creatable and deletable
capa-e2e [It] [unmanaged] [functional] [ClusterClass] Workload cluster with external infrastructure [ClusterClass] should create workload cluster in external VPC
capa-e2e [SynchronizedAfterSuite] (×20)
capa-e2e [SynchronizedBeforeSuite] (×20)
capa-e2e [It] [unmanaged] [functional] External infrastructure, external security groups, VPC peering, internal ELB and private subnet use only should create external clusters in peered VPC and with an internal ELB and only utilize a private subnet
capa-e2e [It] [unmanaged] [functional] Multiple workload clusters Defining clusters in the same namespace should create the clusters
capa-e2e [It] [unmanaged] [functional] Multiple workload clusters in different namespaces with machine failures should setup namespaces correctly for the two clusters
capa-e2e [It] [unmanaged] [functional] [Serial] Upgrade to main branch Kubernetes in same namespace should create the clusters
... skipping 878 lines ...
Jan 25 16:48:39.573: INFO: Setting environment variable: key=AWS_AVAILABILITY_ZONE_2, value=us-west-2b
Jan 25 16:48:39.573: INFO: Setting environment variable: key=AWS_REGION, value=us-west-2
Jan 25 16:48:39.573: INFO: Setting environment variable: key=AWS_SSH_KEY_NAME, value=cluster-api-provider-aws-sigs-k8s-io
Jan 25 16:48:39.573: INFO: Setting environment variable: key=AWS_B64ENCODED_CREDENTIALS, value=*******
<< Timeline
------------------------------
• [FAILED] [25.523 seconds]
[unmanaged] [functional] [ClusterClass] Multitenancy test [ClusterClass] [It] should create cluster with nested assumed role
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_clusterclass_test.go:53

Captured StdOut/StdErr Output >>
Error from server (AlreadyExists): error when creating "STDIN": awsclusterroleidentities.infrastructure.cluster.x-k8s.io "capamultitenancyjump" already exists
Error from server (AlreadyExists): error when creating "STDIN": awsclusterroleidentities.infrastructure.cluster.x-k8s.io "capamultitenancynested" already exists
<< Captured StdOut/StdErr Output

Timeline >>
STEP: Node 19 acquiring resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} @ 01/25/23 16:48:39.575
STEP: Node 19 acquired resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} @ 01/25/23 16:48:40.576
... skipping 20 lines ...
STEP: Deleting cluster cluster-5d0zkd @ 01/25/23 16:48:54.004
INFO: Waiting for the Cluster functional-multitenancy-nested-clusterclass-brffsg/cluster-5d0zkd to be deleted
STEP: Waiting for cluster cluster-5d0zkd to be deleted @ 01/25/23 16:48:54.04
STEP: Deleting namespace used for hosting the "" test spec @ 01/25/23 16:49:04.055
INFO: Deleting namespace functional-multitenancy-nested-clusterclass-brffsg
STEP: Node 19 released resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} @ 01/25/23 16:49:05.077
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:308 @ 01/25/23 16:49:05.077
<< Timeline

[FAILED] Timed out after 10.001s.
Failed to apply the cluster template
Expected success, but got an error:
    <*errors.withStack | 0xc000707878>: {
        error: <*exec.ExitError | 0xc0005fe300>{
            ProcessState: {
                pid: 32617,
                status: 256,
                rusage: {
                    Utime: {Sec: 0, Usec: 773819},
                    Stime: {Sec: 0, Usec: 361999},
... skipping 1323 lines ...
machinedeployment.cluster.x-k8s.io/self-hosted-tu6cj0-md-0 created
awsmachinetemplate.infrastructure.cluster.x-k8s.io/self-hosted-tu6cj0-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/self-hosted-tu6cj0-md-0 created
configmap/cni-self-hosted-tu6cj0-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/self-hosted-tu6cj0-crs-0 created
W0125 17:14:56.730605 27572 reflector.go:347] pkg/mod/k8s.io/client-go@v0.25.0/tools/cache/reflector.go:169: watch of *v1.Event ended with: Internal error occurred: etcdserver: no leader
<< Captured StdOut/StdErr Output

Timeline >>
STEP: Node 3 acquiring resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} @ 01/25/23 16:48:39.601
STEP: Node 3 acquired resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} @ 01/25/23 16:49:05.602
STEP: Creating a namespace for hosting the "self-hosted" test spec @ 01/25/23 16:49:05.602
... skipping 79 lines ...
awsmachinetemplate.infrastructure.cluster.x-k8s.io/ci-default-worker-machinetemplate created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/ci-default-worker-bootstraptemplate created
cluster.cluster.x-k8s.io/self-hosted-7hq32n created
configmap/cni-self-hosted-7hq32n-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/self-hosted-7hq32n-crs-0 created
W0125 17:17:40.383840 27680 reflector.go:347] pkg/mod/k8s.io/client-go@v0.25.0/tools/cache/reflector.go:169: watch of *v1.Event ended with: Internal error occurred: etcdserver: no leader
<< Captured StdOut/StdErr Output

Timeline >>
STEP: Node 16 acquiring resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} @ 01/25/23 17:02:01.374
STEP: Node 16 acquired resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} @ 01/25/23 17:04:57.375
STEP: Creating a namespace for hosting the "self-hosted" test spec @ 01/25/23 17:04:57.375
... skipping 499 lines ...
[ReportAfterSuite] PASSED [0.020 seconds]
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
------------------------------

Summarizing 1 Failure:
[FAIL] [unmanaged] [functional] [ClusterClass] Multitenancy test [ClusterClass] [It] should create cluster with nested assumed role
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:308

Ran 26 of 30 Specs in 3667.189 seconds
FAIL! -- 25 Passed | 1 Failed | 4 Pending | 0 Skipped

Ginkgo ran 1 suite in 1h4m10.765789889s
Test Suite Failed

real    64m10.859s
user    30m8.200s
sys     9m47.647s
make: *** [Makefile:406: test-e2e] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...