Result   | FAILURE
Tests    | 1 failed / 65 succeeded
Started  |
Elapsed  | 1h0m
Revision | main
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capa\-e2e\s\[It\]\s\[unmanaged\]\s\[functional\]\s\[ClusterClass\]\sMultitenancy\stest\s\[ClusterClass\]\sshould\screate\scluster\swith\snested\sassumed\srole$'
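The `--ginkgo.focus` value above is a regular expression over the full spec name, with spaces escaped as `\s` and the end anchored with `$`. A quick local sanity check (no cluster needed) that the regex really selects the failing spec — the `name` and `regex` strings are copied from this log, the variable names themselves are just illustrative:

```shell
# Verify the focus regex matches the failing spec's full name.
name='capa-e2e [It] [unmanaged] [functional] [ClusterClass] Multitenancy test [ClusterClass] should create cluster with nested assumed role'
regex='capa\-e2e\s\[It\]\s\[unmanaged\]\s\[functional\]\s\[ClusterClass\]\sMultitenancy\stest\s\[ClusterClass\]\sshould\screate\scluster\swith\snested\sassumed\srole$'
# grep -P gives PCRE semantics (needed for \s); requires GNU grep
if printf '%s' "$name" | grep -Pq "$regex"; then echo match; else echo nomatch; fi
```

With GNU grep this prints `match`, confirming a local rerun with the same focus flag would pick up exactly this spec.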
[FAILED] Timed out after 10.001s.
Failed to apply the cluster template
Expected success, but got an error:
    <*errors.withStack | 0xc000b39c80>: {
        error: <*exec.ExitError | 0xc000672500>{
            ProcessState: {
                pid: 32343, status: 256,
                rusage: {Utime: {Sec: 0, Usec: 611477}, Stime: {Sec: 0, Usec: 249132}, Maxrss: 110016, Ixrss: 0, Idrss: 0, Isrss: 0, Minflt: 13542, Majflt: 0, Nswap: 0, Inblock: 0, Oublock: 25136, Msgsnd: 0, Msgrcv: 0, Nsignals: 0, Nvcsw: 2377, Nivcsw: 614},
            },
            Stderr: nil,
        },
        stack: [0x1be5460, 0x1be59d1, 0x1d5a40c, 0x2191c93, 0x4db565, 0x4daa5c, 0xa35e9a, 0xa36b0e, 0xa344cd, 0x219116c, 0x22658f8, 0xa11f5b, 0xa26058, 0x4704e1],
    }
    exit status 1
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:308 @ 01/29/23 04:47:38.216
(from junit.e2e_suite.xml)
Error from server (AlreadyExists): error when creating "STDIN": awsclusterroleidentities.infrastructure.cluster.x-k8s.io "capamultitenancyjump" already exists
Error from server (AlreadyExists): error when creating "STDIN": awsclusterroleidentities.infrastructure.cluster.x-k8s.io "capamultitenancynested" already exists
> Enter [BeforeEach] [unmanaged] [functional] [ClusterClass] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_clusterclass_test.go:47 @ 01/29/23 04:47:12.555
< Exit [BeforeEach] [unmanaged] [functional] [ClusterClass] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_clusterclass_test.go:47 @ 01/29/23 04:47:12.555 (0s)
> Enter [It] should create cluster with nested assumed role - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_clusterclass_test.go:53 @ 01/29/23 04:47:12.555
STEP: Node 13 acquiring resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/resource.go:187 @ 01/29/23 04:47:12.567
STEP: Node 13 acquired resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/resource.go:216 @ 01/29/23 04:47:13.568
STEP: Creating a namespace for hosting the "functional-multitenancy-nested-clusterclass" test spec - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/common.go:52 @ 01/29/23 04:47:13.568
INFO: Creating namespace functional-multitenancy-nested-clusterclass-r21cia
INFO: Creating event watcher for namespace "functional-multitenancy-nested-clusterclass-r21cia"
Jan 29 04:47:13.979: INFO: Setting environment variable: key=MULTI_TENANCY_SIMPLE_ROLE_ARN, value=arn:aws:iam::036221693407:role/CAPAMultiTenancySimple
Jan 29 04:47:13.979: INFO: Setting environment variable: key=MULTI_TENANCY_SIMPLE_ROLE_NAME, value=CAPAMultiTenancySimple
Jan 29 04:47:13.979: INFO: Setting environment variable: key=MULTI_TENANCY_SIMPLE_IDENTITY_NAME, value=capamultitenancysimple
Jan 29 04:47:14.041: INFO: Setting environment variable: key=MULTI_TENANCY_JUMP_ROLE_ARN, value=arn:aws:iam::036221693407:role/CAPAMultiTenancyJump
Jan 29 04:47:14.041: INFO: Setting environment variable: key=MULTI_TENANCY_JUMP_ROLE_NAME, value=CAPAMultiTenancyJump
Jan 29 04:47:14.041: INFO: Setting environment variable: key=MULTI_TENANCY_JUMP_IDENTITY_NAME, value=capamultitenancyjump
Jan 29 04:47:14.099: INFO: Setting environment variable: key=MULTI_TENANCY_NESTED_ROLE_ARN, value=arn:aws:iam::036221693407:role/CAPAMultiTenancyNested
Jan 29 04:47:14.099: INFO: Setting environment variable: key=MULTI_TENANCY_NESTED_ROLE_NAME, value=CAPAMultiTenancyNested
Jan 29 04:47:14.099: INFO: Setting environment variable: key=MULTI_TENANCY_NESTED_IDENTITY_NAME, value=capamultitenancynested
STEP: Creating cluster - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_clusterclass_test.go:64 @ 01/29/23 04:47:14.099
INFO: Creating the workload cluster with name "cluster-8z0xiw" using the "nested-multitenancy-clusterclass" template (Kubernetes v1.25.3, 1 control-plane machines, 0 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster cluster-8z0xiw --infrastructure (default) --kubernetes-version v1.25.3 --control-plane-machine-count 1 --worker-machine-count 0 --flavor nested-multitenancy-clusterclass
INFO: Applying the cluster template yaml to the cluster
STEP: Dumping all the Cluster API resources in the "functional-multitenancy-nested-clusterclass-r21cia" namespace - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/common.go:68 @ 01/29/23 04:47:26.334
STEP: Dumping all EC2 instances in the "functional-multitenancy-nested-clusterclass-r21cia" namespace - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/common.go:72 @ 01/29/23 04:47:26.979
STEP: Deleting all clusters in the "functional-multitenancy-nested-clusterclass-r21cia" namespace with intervals ["20m" "10s"] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/common.go:76 @ 01/29/23 04:47:27.141
STEP: Deleting cluster cluster-8z0xiw - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/29/23 04:47:27.167
INFO: Waiting for the Cluster functional-multitenancy-nested-clusterclass-r21cia/cluster-8z0xiw to be deleted
STEP: Waiting for cluster cluster-8z0xiw to be deleted - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/ginkgoextensions/output.go:35 @ 01/29/23 04:47:27.185
STEP: Deleting namespace used for hosting the "" test spec - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/common.go:82 @ 01/29/23 04:47:37.196
INFO: Deleting namespace functional-multitenancy-nested-clusterclass-r21cia
STEP: Node 13 released resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/shared/resource.go:269 @ 01/29/23 04:47:38.216
[FAILED] Timed out after 10.001s.
Failed to apply the cluster template
Expected success, but got an error:
    <*errors.withStack | 0xc000b39c80>: {
        error: <*exec.ExitError | 0xc000672500>{
            ProcessState: {
                pid: 32343, status: 256,
                rusage: {Utime: {Sec: 0, Usec: 611477}, Stime: {Sec: 0, Usec: 249132}, Maxrss: 110016, Ixrss: 0, Idrss: 0, Isrss: 0, Minflt: 13542, Majflt: 0, Nswap: 0, Inblock: 0, Oublock: 25136, Msgsnd: 0, Msgrcv: 0, Nsignals: 0, Nvcsw: 2377, Nivcsw: 614},
            },
            Stderr: nil,
        },
        stack: [0x1be5460, 0x1be59d1, 0x1d5a40c, 0x2191c93, 0x4db565, 0x4daa5c, 0xa35e9a, 0xa36b0e, 0xa344cd, 0x219116c, 0x22658f8, 0xa11f5b, 0xa26058, 0x4704e1],
    }
    exit status 1
In [It] at: /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:308 @ 01/29/23 04:47:38.216
< Exit [It] should create cluster with nested assumed role - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_clusterclass_test.go:53 @ 01/29/23 04:47:38.216 (25.661s)
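AWSClusterRoleIdentity objects are cluster-scoped, so the `capamultitenancyjump` and `capamultitenancynested` identities named in the AlreadyExists errors collide with copies left behind by an earlier or concurrently running spec. A hypothetical cleanup sketch (the `cleanup_identities` helper and the `KUBECTL` override are illustrative, not part of the test suite; the resource and object names are taken from the log):

```shell
# Remove the leftover cluster-scoped identities named in the errors above.
# KUBECTL is overridable so the helper can be dry-run without a live cluster.
KUBECTL="${KUBECTL:-kubectl}"

cleanup_identities() {
  for name in capamultitenancyjump capamultitenancynested; do
    # --ignore-not-found keeps the loop idempotent on an already-clean cluster
    "$KUBECTL" delete awsclusterroleidentities.infrastructure.cluster.x-k8s.io "$name" --ignore-not-found
  done
}
```

Running `cleanup_identities` against the management cluster before re-applying the template should clear the collision, assuming no other spec still depends on those identities.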
capa-e2e [It] [unmanaged] [Cluster API Framework] Cluster Upgrade Spec - HA Control Plane Cluster [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capa-e2e [It] [unmanaged] [Cluster API Framework] Cluster Upgrade Spec - HA control plane with scale in rollout [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capa-e2e [It] [unmanaged] [Cluster API Framework] Cluster Upgrade Spec - Single control plane with workers [K8s-Upgrade] Should create and upgrade a workload cluster and eventually run kubetest
capa-e2e [It] [unmanaged] [Cluster API Framework] Clusterctl Upgrade Spec [from latest v1beta1 release to v1beta2] Should create a management cluster and then upgrade all the providers
capa-e2e [It] [unmanaged] [Cluster API Framework] Machine Pool Spec Should successfully create a cluster with machine pool machines
capa-e2e [It] [unmanaged] [Cluster API Framework] Machine Remediation Spec Should successfully trigger KCP remediation
capa-e2e [It] [unmanaged] [Cluster API Framework] Machine Remediation Spec Should successfully trigger machine deployment remediation
capa-e2e [It] [unmanaged] [Cluster API Framework] Self Hosted Spec Should pivot the bootstrap cluster to a self-hosted cluster
capa-e2e [It] [unmanaged] [Cluster API Framework] [ClusterClass] Cluster Upgrade Spec - HA control plane with workers [K8s-Upgrade] [ClusterClass] Should create and upgrade a workload cluster and eventually run kubetest
capa-e2e [It] [unmanaged] [Cluster API Framework] [ClusterClass] ClusterClass Changes Spec - SSA immutability checks [ClusterClass] Should successfully rollout the managed topology upon changes to the ClusterClass
capa-e2e [It] [unmanaged] [Cluster API Framework] [ClusterClass] Self Hosted Spec [ClusterClass] Should pivot the bootstrap cluster to a self-hosted cluster
capa-e2e [It] [unmanaged] [Cluster API Framework] [smoke] [PR-Blocking] Running the quick-start spec Should create a workload cluster
capa-e2e [It] [unmanaged] [Cluster API Framework] [smoke] [PR-Blocking] Running the quick-start spec with ClusterClass Should create a workload cluster
capa-e2e [It] [unmanaged] [functional] CSI=external CCM=external AWSCSIMigration=on: upgrade to v1.23 should create volumes dynamically with external cloud provider
capa-e2e [It] [unmanaged] [functional] CSI=external CCM=in-tree AWSCSIMigration=on: upgrade to v1.23 should create volumes dynamically with external cloud provider
capa-e2e [It] [unmanaged] [functional] CSI=in-tree CCM=in-tree AWSCSIMigration=off: upgrade to v1.23 should create volumes dynamically with external cloud provider
capa-e2e [It] [unmanaged] [functional] GPU-enabled cluster test should create cluster with single worker
capa-e2e [It] [unmanaged] [functional] MachineDeployment misconfigurations MachineDeployment misconfigurations
capa-e2e [It] [unmanaged] [functional] Multitenancy test should create cluster with nested assumed role
capa-e2e [It] [unmanaged] [functional] Workload cluster with AWS S3 and Ignition parameter It should be creatable and deletable
capa-e2e [It] [unmanaged] [functional] Workload cluster with AWS SSM Parameter as the Secret Backend should be creatable and deletable
capa-e2e [It] [unmanaged] [functional] Workload cluster with EFS driver should pass dynamic provisioning test
capa-e2e [It] [unmanaged] [functional] Workload cluster with spot instances should be creatable and deletable
capa-e2e [It] [unmanaged] [functional] [ClusterClass] Workload cluster with AWS SSM Parameter as the Secret Backend [ClusterClass] should be creatable and deletable
capa-e2e [It] [unmanaged] [functional] [ClusterClass] Workload cluster with external infrastructure [ClusterClass] should create workload cluster in external VPC
capa-e2e [SynchronizedAfterSuite] (×20, one entry per parallel node)
capa-e2e [SynchronizedBeforeSuite] (×20, one entry per parallel node)
capa-e2e [It] [unmanaged] [functional] External infrastructure, external security groups, VPC peering, internal ELB and private subnet use only should create external clusters in peered VPC and with an internal ELB and only utilize a private subnet
capa-e2e [It] [unmanaged] [functional] Multiple workload clusters Defining clusters in the same namespace should create the clusters
capa-e2e [It] [unmanaged] [functional] Multiple workload clusters in different namespaces with machine failures should setup namespaces correctly for the two clusters
capa-e2e [It] [unmanaged] [functional] [Serial] Upgrade to main branch Kubernetes in same namespace should create the clusters
... skipping 878 lines ...
Jan 29 04:47:12.578: INFO: Setting environment variable: key=AWS_AVAILABILITY_ZONE_2, value=us-west-2b
Jan 29 04:47:12.578: INFO: Setting environment variable: key=AWS_REGION, value=us-west-2
Jan 29 04:47:12.578: INFO: Setting environment variable: key=AWS_SSH_KEY_NAME, value=cluster-api-provider-aws-sigs-k8s-io
Jan 29 04:47:12.578: INFO: Setting environment variable: key=AWS_B64ENCODED_CREDENTIALS, value=*******
<< Timeline
------------------------------
• [FAILED] [25.661 seconds]
[unmanaged] [functional] [ClusterClass] Multitenancy test [ClusterClass] [It] should create cluster with nested assumed role
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-aws/test/e2e/suites/unmanaged/unmanaged_functional_clusterclass_test.go:53
Captured StdOut/StdErr Output >>
Error from server (AlreadyExists): error when creating "STDIN": awsclusterroleidentities.infrastructure.cluster.x-k8s.io "capamultitenancyjump" already exists
Error from server (AlreadyExists): error when creating "STDIN": awsclusterroleidentities.infrastructure.cluster.x-k8s.io "capamultitenancynested" already exists
<< Captured StdOut/StdErr Output
Timeline >>
STEP: Node 13 acquiring resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} @ 01/29/23 04:47:12.567
STEP: Node 13 acquired resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} @ 01/29/23 04:47:13.568
... skipping 20 lines ...
STEP: Deleting cluster cluster-8z0xiw @ 01/29/23 04:47:27.167
INFO: Waiting for the Cluster functional-multitenancy-nested-clusterclass-r21cia/cluster-8z0xiw to be deleted
STEP: Waiting for cluster cluster-8z0xiw to be deleted @ 01/29/23 04:47:27.185
STEP: Deleting namespace used for hosting the "" test spec @ 01/29/23 04:47:37.196
INFO: Deleting namespace functional-multitenancy-nested-clusterclass-r21cia
STEP: Node 13 released resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} @ 01/29/23 04:47:38.216
[FAILED] in [It] - /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:308 @ 01/29/23 04:47:38.216
<< Timeline
[FAILED] Timed out after 10.001s.
Failed to apply the cluster template
Expected success, but got an error:
    <*errors.withStack | 0xc000b39c80>: {
        error: <*exec.ExitError | 0xc000672500>{
            ProcessState: {
                pid: 32343, status: 256,
                rusage: {Utime: {Sec: 0, Usec: 611477}, Stime: {Sec: 0, Usec: 249132},
... skipping 1051 lines ...
awsmachinetemplate.infrastructure.cluster.x-k8s.io/ci-default-worker-machinetemplate created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/ci-default-worker-bootstraptemplate created
cluster.cluster.x-k8s.io/self-hosted-iy8rlp created
configmap/cni-self-hosted-iy8rlp-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/self-hosted-iy8rlp-crs-0 created
W0129 05:03:55.386054 27278 reflector.go:347] pkg/mod/k8s.io/client-go@v0.25.0/tools/cache/reflector.go:169: watch of *v1.Event ended with: Internal error occurred: etcdserver: no leader
<< Captured StdOut/StdErr Output
Timeline >>
STEP: Node 5 acquiring resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} @ 01/29/23 04:47:12.554
STEP: Node 5 acquired resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} @ 01/29/23 04:47:13.555
STEP: Creating a namespace for hosting the "self-hosted" test spec @ 01/29/23 04:47:13.556
... skipping 142 lines ...
machinedeployment.cluster.x-k8s.io/self-hosted-o5g7xe-md-0 created
awsmachinetemplate.infrastructure.cluster.x-k8s.io/self-hosted-o5g7xe-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/self-hosted-o5g7xe-md-0 created
configmap/cni-self-hosted-o5g7xe-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/self-hosted-o5g7xe-crs-0 created
W0129 05:04:45.677501 27355 reflector.go:347] pkg/mod/k8s.io/client-go@v0.25.0/tools/cache/reflector.go:169: watch of *v1.Event ended with: Internal error occurred: etcdserver: no leader
<< Captured StdOut/StdErr Output
Timeline >>
STEP: Node 13 acquiring resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} @ 01/29/23 04:47:38.221
STEP: Node 13 acquired resources: {ec2-normal:0, vpc:1, eip:1, ngw:1, igw:1, classiclb:1, ec2-GPU:0, volume-gp2:0} @ 01/29/23 04:47:39.222
STEP: Creating a namespace for hosting the "self-hosted" test spec @ 01/29/23 04:47:39.222
... skipping 708 lines ...
[ReportAfterSuite] PASSED [0.019 seconds]
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
------------------------------
Summarizing 1 Failure:
[FAIL] [unmanaged] [functional] [ClusterClass] Multitenancy test [ClusterClass] [It] should create cluster with nested assumed role
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.3.1/framework/clusterctl/clusterctl_helpers.go:308

Ran 26 of 30 Specs in 3357.519 seconds
FAIL! -- 25 Passed | 1 Failed | 4 Pending | 0 Skipped
Ginkgo ran 1 suite in 58m11.090500769s
Test Suite Failed

real 58m11.158s
user 23m23.489s
sys 6m28.561s
make: *** [Makefile:406: test-e2e] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...