Result   | FAILURE
Tests    | 1 failed / 65 succeeded
Started  |
Elapsed  | 1h7m
Revision | main
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capa\-e2e\s\[It\]\s\[unmanaged\]\s\[functional\]\sMultitenancy\stest\sshould\screate\scluster\swith\snested\sassumed\srole$'
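For orientation, the focus regex above selects the single failing spec named in this report. A minimal sketch of how that regex maps onto a Ginkgo spec tree is shown below; the nesting and package name are assumptions reconstructed from the spec names in this report, and the real spec lives at test/e2e/suites/unmanaged/unmanaged_functional_test.go:161.

```go
// Sketch only: how the --ginkgo.focus regex selects the failing spec.
// The layout here is an assumption, not the actual CAPA test source.
package unmanaged_test

import . "github.com/onsi/ginkgo/v2"

var _ = Describe("[unmanaged] [functional]", func() {
	Describe("Multitenancy test", func() {
		It("should create cluster with nested assumed role", func() {
			// Applies the "nested-multitenancy" cluster template via clusterctl
			// and waits for the workload cluster to become ready.
		})
	})
})
```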
[FAILED] Timed out after 10.001s.
Failed to apply the cluster template
Expected success, but got an error:
    <*errors.withStack | 0xc000549728>: {
        error: <*exec.ExitError | 0xc000658ce0>{
            ProcessState: {
                pid: 32081,
                status: 256,
                rusage: {
                    Utime: {Sec: 0, Usec: 618109},
                    Stime: {Sec: 0, Usec: 245719},
                    Maxrss: 103076, Ixrss: 0, Idrss: 0, Isrss: 0,
                    Minflt: 17363, Majflt: 0, Nswap: 0,
                    Inblock: 0, Oublock: 25136,
                    Msgsnd: 0, Msgrcv: 0, Nsignals: 0,
                    Nvcsw: 2560, Nivcsw: 1744,
                },
            },
            Stderr: nil,
        },
        stack: [0x1be46e0, 0x1be4c51, 0x1d5968c, 0x2190f13, 0x4db565, 0x4daa5c, 0xa35b9a, 0xa364c5, 0xa3440d, 0x21903ec, 0x22721d8, 0xa11f5b, 0xa26058, 0x4704e1],
    }
    exit status 1
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:308 @ 01/20/23 04:42:55.584
(from junit.e2e_suite.xml)
Error from server (AlreadyExists): error when creating "STDIN": awsclusterroleidentities.infrastructure.cluster.x-k8s.io "capamultitenancyjump" already exists
Error from server (AlreadyExists): error when creating "STDIN": awsclusterroleidentities.infrastructure.cluster.x-k8s.io "capamultitenancynested" already exists

> Enter [BeforeEach] [unmanaged] [functional] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:55 @ 01/20/23 04:42:31.083
< Exit [BeforeEach] [unmanaged] [functional] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:55 @ 01/20/23 04:42:31.083 (0s)
> Enter [It] should create cluster with nested assumed role - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:161 @ 01/20/23 04:42:31.083
STEP: Node 14 acquiring resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/resource.go:187 @ 01/20/23 04:42:31.085
STEP: Node 14 acquired resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/resource.go:216 @ 01/20/23 04:42:32.087
STEP: Creating a namespace for hosting the "functional-multitenancy-nested" test spec - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/common.go:52 @ 01/20/23 04:42:32.087
INFO: Creating namespace functional-multitenancy-nested-qsssq3
INFO: Creating event watcher for namespace "functional-multitenancy-nested-qsssq3"
Jan 20 04:42:32.726: INFO: Setting environment variable: key=MULTI_TENANCY_SIMPLE_ROLE_ARN, value=arn:aws:iam::583069479333:role/CAPAMultiTenancySimple
Jan 20 04:42:32.726: INFO: Setting environment variable: key=MULTI_TENANCY_SIMPLE_ROLE_NAME, value=CAPAMultiTenancySimple
Jan 20 04:42:32.726: INFO: Setting environment variable: key=MULTI_TENANCY_SIMPLE_IDENTITY_NAME, value=capamultitenancysimple
Jan 20 04:42:32.785: INFO: Setting environment variable: key=MULTI_TENANCY_JUMP_ROLE_ARN, value=arn:aws:iam::583069479333:role/CAPAMultiTenancyJump
Jan 20 04:42:32.785: INFO: Setting environment variable: key=MULTI_TENANCY_JUMP_ROLE_NAME, value=CAPAMultiTenancyJump
Jan 20 04:42:32.785: INFO: Setting environment variable: key=MULTI_TENANCY_JUMP_IDENTITY_NAME, value=capamultitenancyjump
Jan 20 04:42:32.843: INFO: Setting environment variable: key=MULTI_TENANCY_NESTED_ROLE_ARN, value=arn:aws:iam::583069479333:role/CAPAMultiTenancyNested
Jan 20 04:42:32.843: INFO: Setting environment variable: key=MULTI_TENANCY_NESTED_ROLE_NAME, value=CAPAMultiTenancyNested
Jan 20 04:42:32.843: INFO: Setting environment variable: key=MULTI_TENANCY_NESTED_IDENTITY_NAME, value=capamultitenancynested
STEP: Creating cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:171 @ 01/20/23 04:42:32.843
INFO: Creating the workload cluster with name "functional-multitenancy-nested-n6s56w" using the "nested-multitenancy" template (Kubernetes v1.25.3, 1 control-plane machines, 0 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster functional-multitenancy-nested-n6s56w --infrastructure (default) --kubernetes-version v1.25.3 --control-plane-machine-count 1 --worker-machine-count 0 --flavor nested-multitenancy
INFO: Applying the cluster template yaml to the cluster
STEP: Dumping all the Cluster API resources in the "functional-multitenancy-nested-qsssq3" namespace - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/common.go:68 @ 01/20/23 04:42:43.984
STEP: Dumping all EC2 instances in the "functional-multitenancy-nested-qsssq3" namespace - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/common.go:72 @ 01/20/23 04:42:44.297
STEP: Deleting all clusters in the "functional-multitenancy-nested-qsssq3" namespace with intervals ["20m" "10s"] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/common.go:76 @ 01/20/23 04:42:44.43
STEP: Deleting cluster functional-multitenancy-nested-n6s56w - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/20/23 04:42:44.535
INFO: Waiting for the Cluster functional-multitenancy-nested-qsssq3/functional-multitenancy-nested-n6s56w to be deleted
STEP: Waiting for cluster functional-multitenancy-nested-n6s56w to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/20/23 04:42:44.549
STEP: Deleting namespace used for hosting the "" test spec - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/common.go:82 @ 01/20/23 04:42:54.565
INFO: Deleting namespace functional-multitenancy-nested-qsssq3
STEP: Node 14 released resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/resource.go:269 @ 01/20/23 04:42:55.583
[FAILED] Timed out after 10.001s.
Failed to apply the cluster template
Expected success, but got an error:
    <*errors.withStack | 0xc000549728>: {
        error: <*exec.ExitError | 0xc000658ce0>{
            ProcessState: {
                pid: 32081, status: 256,
                rusage: {
                    Utime: {Sec: 0, Usec: 618109}, Stime: {Sec: 0, Usec: 245719},
                    Maxrss: 103076, Ixrss: 0, Idrss: 0, Isrss: 0,
                    Minflt: 17363, Majflt: 0, Nswap: 0, Inblock: 0, Oublock: 25136,
                    Msgsnd: 0, Msgrcv: 0, Nsignals: 0, Nvcsw: 2560, Nivcsw: 1744,
                },
            },
            Stderr: nil,
        },
        stack: [0x1be46e0, 0x1be4c51, 0x1d5968c, 0x2190f13, 0x4db565, 0x4daa5c, 0xa35b9a, 0xa364c5, 0xa3440d, 0x21903ec, 0x22721d8, 0xa11f5b, 0xa26058, 0x4704e1],
    }
    exit status 1
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:308 @ 01/20/23 04:42:55.584
< Exit [It] should create cluster with nested assumed role - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:161 @ 01/20/23 04:42:55.584 (24.501s)
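The immediate cause is an AlreadyExists conflict: AWSClusterRoleIdentity is a cluster-scoped resource, so "capamultitenancyjump" and "capamultitenancynested" left behind by an earlier or concurrent run collide when the nested-multitenancy template is applied. A hypothetical pre-flight cleanup against the management cluster could look like the sketch below; the controller-runtime wiring and the v1beta2 API version are assumptions for illustration, not part of the e2e framework.

```go
// Hypothetical cleanup sketch: remove leftover cluster-scoped
// AWSClusterRoleIdentity objects before re-applying the template.
// Assumes KUBECONFIG points at the management (bootstrap) cluster.
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/config"
)

func main() {
	cfg, err := config.GetConfig()
	if err != nil {
		panic(err)
	}
	c, err := client.New(cfg, client.Options{})
	if err != nil {
		panic(err)
	}

	gvk := schema.GroupVersionKind{
		Group:   "infrastructure.cluster.x-k8s.io",
		Version: "v1beta2", // assumption: whichever version the CRD currently serves
		Kind:    "AWSClusterRoleIdentity",
	}

	// Names copied from the AlreadyExists errors in the captured output.
	for _, name := range []string{"capamultitenancyjump", "capamultitenancynested"} {
		u := &unstructured.Unstructured{}
		u.SetGroupVersionKind(gvk)
		u.SetName(name) // cluster-scoped: no namespace
		if err := c.Delete(context.Background(), u); err != nil && !apierrors.IsNotFound(err) {
			panic(err)
		}
		fmt.Printf("ensured AWSClusterRoleIdentity %q is absent\n", name)
	}
}
```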
capa-e2e [It] [unmanaged] [Cluster API Framework] Cluster Upgrade Spec - HA Control Plane Cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capa-e2e [It] [unmanaged] [Cluster API Framework] Cluster Upgrade Spec - HA control plane with scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capa-e2e [It] [unmanaged] [Cluster API Framework] Cluster Upgrade Spec - Single control plane with workers [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capa-e2e [It] [unmanaged] [Cluster API Framework] Clusterctl Upgrade Spec [from latest v1beta1 release to v1beta2] Should create a management cluster and then upgrade all the providers
capa-e2e [It] [unmanaged] [Cluster API Framework] Machine Pool Spec Should successfully create a cluster with machine pool machines
capa-e2e [It] [unmanaged] [Cluster API Framework] Machine Remediation Spec Should successfully trigger KCP remediation
capa-e2e [It] [unmanaged] [Cluster API Framework] Machine Remediation Spec Should successfully trigger machine deployment remediation
capa-e2e [It] [unmanaged] [Cluster API Framework] Self Hosted Spec Should pivot the bootstrap cluster to a self-hosted cluster
capa-e2e [It] [unmanaged] [Cluster API Framework] [ClusterClass] Cluster Upgrade Spec - HA control plane with workers [K8s-Upgrade] [ClusterClass] Should create and upgrade a workload cluster and eventually run kubetest
capa-e2e [It] [unmanaged] [Cluster API Framework] [ClusterClass] ClusterClass Changes Spec - SSA immutability checks [ClusterClass] Should successfully rollout the managed topology upon changes to the ClusterClass
capa-e2e [It] [unmanaged] [Cluster API Framework] [ClusterClass] Self Hosted Spec [ClusterClass] Should pivot the bootstrap cluster to a self-hosted cluster
capa-e2e [It] [unmanaged] [Cluster API Framework] [smoke] [PR-Blocking] Running the quick-start spec Should create a workload cluster
capa-e2e [It] [unmanaged] [Cluster API Framework] [smoke] [PR-Blocking] Running the quick-start spec with ClusterClass Should create a workload cluster
capa-e2e [It] [unmanaged] [functional] CSI=external CCM=external AWSCSIMigration=on: upgrade to v1.23 should create volumes dynamically with external cloud provider
capa-e2e [It] [unmanaged] [functional] CSI=external CCM=in-tree AWSCSIMigration=on: upgrade to v1.23 should create volumes dynamically with external cloud provider
capa-e2e [It] [unmanaged] [functional] CSI=in-tree CCM=in-tree AWSCSIMigration=off: upgrade to v1.23 should create volumes dynamically with external cloud provider
capa-e2e [It] [unmanaged] [functional] GPU-enabled cluster test should create cluster with single worker
capa-e2e [It] [unmanaged] [functional] MachineDeployment misconfigurations MachineDeployment misconfigurations
capa-e2e [It] [unmanaged] [functional] Workload cluster with AWS S3 and Ignition parameter It should be creatable and deletable
capa-e2e [It] [unmanaged] [functional] Workload cluster with AWS SSM Parameter as the Secret Backend should be creatable and deletable
capa-e2e [It] [unmanaged] [functional] Workload cluster with EFS driver should pass dynamic provisioning test
capa-e2e [It] [unmanaged] [functional] Workload cluster with spot instances should be creatable and deletable
capa-e2e [It] [unmanaged] [functional] [ClusterClass] Multitenancy test [ClusterClass] should create cluster with nested assumed role
capa-e2e [It] [unmanaged] [functional] [ClusterClass] Workload cluster with AWS SSM Parameter as the Secret Backend [ClusterClass] should be creatable and deletable
capa-e2e [It] [unmanaged] [functional] [ClusterClass] Workload cluster with external infrastructure [ClusterClass] should create workload cluster in external VPC
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedAfterSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [SynchronizedBeforeSuite]
capa-e2e [It] [unmanaged] [functional] External infrastructure, external security groups, VPC peering, internal ELB and private subnet use only should create external clusters in peered VPC and with an internal ELB and only utilize a private subnet
capa-e2e [It] [unmanaged] [functional] Multiple workload clusters Defining clusters in the same namespace should create the clusters
capa-e2e [It] [unmanaged] [functional] Multiple workload clusters in different namespaces with machine failures should setup namespaces correctly for the two clusters
capa-e2e [It] [unmanaged] [functional] [Serial] Upgrade to main branch Kubernetes in same namespace should create the clusters
... skipping 893 lines ...
Jan 20 04:42:31.194: INFO: Setting environment variable: key=AWS_AVAILABILITY_ZONE_2, value=us-west-2b
Jan 20 04:42:31.194: INFO: Setting environment variable: key=AWS_REGION, value=us-west-2
Jan 20 04:42:31.194: INFO: Setting environment variable: key=AWS_SSH_KEY_NAME, value=cluster-api-provider-aws-sigs-k8s-io
Jan 20 04:42:31.194: INFO: Setting environment variable: key=AWS_B64ENCODED_CREDENTIALS, value=*******
<< Timeline
------------------------------
• [FAILED] [24.501 seconds]
[unmanaged] [functional] Multitenancy test [It] should create cluster with nested assumed role
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_test.go:161

Captured StdOut/StdErr Output >>
Error from server (AlreadyExists): error when creating "STDIN": awsclusterroleidentities.infrastructure.cluster.x-k8s.io "capamultitenancyjump" already exists
Error from server (AlreadyExists): error when creating "STDIN": awsclusterroleidentities.infrastructure.cluster.x-k8s.io "capamultitenancynested" already exists
<< Captured StdOut/StdErr Output

Timeline >>
STEP: Node 14 acquiring resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} @ 01/20/23 04:42:31.085
STEP: Node 14 acquired resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} @ 01/20/23 04:42:32.087
... skipping 20 lines ...
STEP: Deleting cluster functional-multitenancy-nested-n6s56w @ 01/20/23 04:42:44.535
INFO: Waiting for the Cluster functional-multitenancy-nested-qsssq3/functional-multitenancy-nested-n6s56w to be deleted
STEP: Waiting for cluster functional-multitenancy-nested-n6s56w to be deleted @ 01/20/23 04:42:44.549
STEP: Deleting namespace used for hosting the "" test spec @ 01/20/23 04:42:54.565
INFO: Deleting namespace functional-multitenancy-nested-qsssq3
STEP: Node 14 released resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} @ 01/20/23 04:42:55.583
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:308 @ 01/20/23 04:42:55.584
<< Timeline

[FAILED] Timed out after 10.001s.
Failed to apply the cluster template
Expected success, but got an error:
    <*errors.withStack | 0xc000549728>: {
        error: <*exec.ExitError | 0xc000658ce0>{
            ProcessState: {
                pid: 32081,
                status: 256,
                rusage: {
                    Utime: {Sec: 0, Usec: 618109},
                    Stime: {Sec: 0, Usec: 245719},
... skipping 1052 lines ...
awsmachinetemplate.infrastructure.cluster.x-k8s.io/ci-default-worker-machinetemplate created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/ci-default-worker-bootstraptemplate created
cluster.cluster.x-k8s.io/self-hosted-h7jdmo created
configmap/cni-self-hosted-h7jdmo-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/self-hosted-h7jdmo-crs-0 created
W0120 04:57:16.740291 27100 reflector.go:347] pkg/mod/k8s.io/client-go@v0.25.0/tools/cache/reflector.go:169: watch of *v1.Event ended with: Internal error occurred: etcdserver: no leader
<< Captured StdOut/StdErr Output

Timeline >>
STEP: Node 3 acquiring resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} @ 01/20/23 04:42:31.124
STEP: Node 3 acquired resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} @ 01/20/23 04:42:32.126
STEP: Creating a namespace for hosting the "self-hosted" test spec @ 01/20/23 04:42:32.126
... skipping 274 lines ...
machinedeployment.cluster.x-k8s.io/self-hosted-26hwzh-md-0 created
awsmachinetemplate.infrastructure.cluster.x-k8s.io/self-hosted-26hwzh-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/self-hosted-26hwzh-md-0 created
configmap/cni-self-hosted-26hwzh-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/self-hosted-26hwzh-crs-0 created
W0120 05:07:21.263759 27099 reflector.go:347] pkg/mod/k8s.io/client-go@v0.25.0/tools/cache/reflector.go:169: watch of *v1.Event ended with: Internal error occurred: etcdserver: no leader
I0120 05:07:35.496698 27099 trace.go:205] Trace[1207920314]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.25.0/tools/cache/reflector.go:169 (20-Jan-2023 05:07:22.115) (total time: 13381ms):
Trace[1207920314]: ---"Objects listed" error:<nil> 13381ms (05:07:35.496)
Trace[1207920314]: [13.381269001s] [13.381269001s] END
<< Captured StdOut/StdErr Output

Timeline >>
STEP: Node 2 acquiring resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} @ 01/20/23 04:42:31.2
STEP: Node 2 acquired resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} @ 01/20/23 04:42:56.201
... skipping 577 lines ...
[ReportAfterSuite] PASSED [0.018 seconds]
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
------------------------------

Summarizing 1 Failure:
  [FAIL] [unmanaged] [functional] Multitenancy test [It] should create cluster with nested assumed role
  /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:308

Ran 26 of 30 Specs in 3794.361 seconds
FAIL! -- 25 Passed | 1 Failed | 4 Pending | 0 Skipped

Ginkgo ran 1 suite in 1h4m53.065990313s
Test Suite Failed

real	64m53.140s
user	21m16.462s
sys	5m13.657s
make: *** [Makefile:406: test-e2e] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...