Result: FAILURE
Tests: 1 failed / 0 succeeded
Started: 2022-11-21 12:44
Elapsed: 2m57s
Revision: master

Test Failures


kubetest2 Down 42s

exit status 1
from junit_runner.xml



Error lines from build-log.txt

... skipping 209 lines ...
I1121 12:45:31.466274    6258 local.go:42] ⚙️ kubectl cluster-info dump --all-namespaces -o yaml --output-directory /logs/artifacts/cluster-info
I1121 12:45:31.542391    6258 dumplogs.go:188] /tmp/kops.BrBB3PkNZ toolbox dump --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --private-key /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu -o yaml
I1121 12:45:31.542441    6258 local.go:42] ⚙️ /tmp/kops.BrBB3PkNZ toolbox dump --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --private-key /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu -o yaml
W1121 12:45:45.165635    6258 dumplogs.go:270] ControlPlane instance not found from kops toolbox dump
I1121 12:45:45.165834    6258 dumplogs.go:126] kubectl --request-timeout 5s get csinodes --all-namespaces -o yaml
I1121 12:45:45.165850    6258 local.go:42] ⚙️ kubectl --request-timeout 5s get csinodes --all-namespaces -o yaml
W1121 12:45:45.253176    6258 dumplogs.go:132] Failed to get csinodes: exit status 1
I1121 12:45:45.253410    6258 dumplogs.go:126] kubectl --request-timeout 5s get csidrivers --all-namespaces -o yaml
I1121 12:45:45.253425    6258 local.go:42] ⚙️ kubectl --request-timeout 5s get csidrivers --all-namespaces -o yaml
W1121 12:45:45.337126    6258 dumplogs.go:132] Failed to get csidrivers: exit status 1
I1121 12:45:45.337247    6258 dumplogs.go:126] kubectl --request-timeout 5s get storageclasses --all-namespaces -o yaml
I1121 12:45:45.337261    6258 local.go:42] ⚙️ kubectl --request-timeout 5s get storageclasses --all-namespaces -o yaml
W1121 12:45:45.432552    6258 dumplogs.go:132] Failed to get storageclasses: exit status 1
I1121 12:45:45.432677    6258 dumplogs.go:126] kubectl --request-timeout 5s get persistentvolumes --all-namespaces -o yaml
I1121 12:45:45.432692    6258 local.go:42] ⚙️ kubectl --request-timeout 5s get persistentvolumes --all-namespaces -o yaml
W1121 12:45:45.511967    6258 dumplogs.go:132] Failed to get persistentvolumes: exit status 1
I1121 12:45:45.512091    6258 dumplogs.go:126] kubectl --request-timeout 5s get mutatingwebhookconfigurations --all-namespaces -o yaml
I1121 12:45:45.512105    6258 local.go:42] ⚙️ kubectl --request-timeout 5s get mutatingwebhookconfigurations --all-namespaces -o yaml
W1121 12:45:45.595384    6258 dumplogs.go:132] Failed to get mutatingwebhookconfigurations: exit status 1
I1121 12:45:45.595549    6258 dumplogs.go:126] kubectl --request-timeout 5s get validatingwebhookconfigurations --all-namespaces -o yaml
I1121 12:45:45.595575    6258 local.go:42] ⚙️ kubectl --request-timeout 5s get validatingwebhookconfigurations --all-namespaces -o yaml
W1121 12:45:45.671408    6258 dumplogs.go:132] Failed to get validatingwebhookconfigurations: exit status 1
I1121 12:45:45.671464    6258 local.go:42] ⚙️ kubectl --request-timeout 5s get namespaces --no-headers -o custom-columns=name:.metadata.name
W1121 12:45:45.752417    6258 down.go:34] Dumping cluster logs at the start of Down() failed: failed to get namespaces: exit status 1
I1121 12:45:45.752485    6258 down.go:48] /tmp/kops.BrBB3PkNZ delete cluster --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --yes
I1121 12:45:45.752499    6258 local.go:42] ⚙️ /tmp/kops.BrBB3PkNZ delete cluster --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --yes
I1121 12:45:45.772566    6384 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1121 12:45:45.772658    6384 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
No cloud resources to delete

error removing cluster from state store: refusing to delete: unknown file found: s3://k8s-kops-prow/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/cluster-completed.spec
I1121 12:45:59.550781    6258 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/11/21 12:45:59 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I1121 12:45:59.563667    6258 http.go:37] curl https://ip.jsb.workers.dev
I1121 12:45:59.675056    6258 up.go:167] /tmp/kops.BrBB3PkNZ create cluster --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --cloud aws --kubernetes-version 1.21.0 --ssh-public-key /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --override=cluster.spec.nodeTerminationHandler.enabled=true --admin-access 34.66.61.131/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ap-northeast-1a --master-size c5.large
I1121 12:45:59.675102    6258 local.go:42] ⚙️ /tmp/kops.BrBB3PkNZ create cluster --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --cloud aws --kubernetes-version 1.21.0 --ssh-public-key /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --override=cluster.spec.nodeTerminationHandler.enabled=true --admin-access 34.66.61.131/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ap-northeast-1a --master-size c5.large
I1121 12:45:59.695617    6394 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1121 12:45:59.696568    6394 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1121 12:45:59.771070    6394 create_cluster.go:728] Using SSH public key: /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519.pub

cluster "e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io" already exists; use 'kops update cluster' to apply changes
Error: exit status 1
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.BrBB3PkNZ --down
I1121 12:46:00.347697    6411 featureflag.go:160] FeatureFlag "SpecOverrideFlag"=true
I1121 12:46:00.351181    6411 app.go:61] The files in RunDir shall not be part of Artifacts
I1121 12:46:00.351220    6411 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I1121 12:46:00.351255    6411 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/14bcf49b-699a-11ed-bcbb-8a04251da9db"
... skipping 8 lines ...
I1121 12:46:15.517704    6411 local.go:42] ⚙️ kubectl cluster-info dump --all-namespaces -o yaml --output-directory /logs/artifacts/cluster-info
I1121 12:46:15.579337    6411 dumplogs.go:188] /tmp/kops.BrBB3PkNZ toolbox dump --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --private-key /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu -o yaml
I1121 12:46:15.579388    6411 local.go:42] ⚙️ /tmp/kops.BrBB3PkNZ toolbox dump --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --private-key /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu -o yaml
W1121 12:46:28.756680    6411 dumplogs.go:270] ControlPlane instance not found from kops toolbox dump
I1121 12:46:28.756782    6411 dumplogs.go:126] kubectl --request-timeout 5s get csinodes --all-namespaces -o yaml
I1121 12:46:28.756791    6411 local.go:42] ⚙️ kubectl --request-timeout 5s get csinodes --all-namespaces -o yaml
W1121 12:46:28.823652    6411 dumplogs.go:132] Failed to get csinodes: exit status 1
I1121 12:46:28.823766    6411 dumplogs.go:126] kubectl --request-timeout 5s get csidrivers --all-namespaces -o yaml
I1121 12:46:28.823778    6411 local.go:42] ⚙️ kubectl --request-timeout 5s get csidrivers --all-namespaces -o yaml
W1121 12:46:28.898214    6411 dumplogs.go:132] Failed to get csidrivers: exit status 1
I1121 12:46:28.898324    6411 dumplogs.go:126] kubectl --request-timeout 5s get storageclasses --all-namespaces -o yaml
I1121 12:46:28.898336    6411 local.go:42] ⚙️ kubectl --request-timeout 5s get storageclasses --all-namespaces -o yaml
W1121 12:46:28.960512    6411 dumplogs.go:132] Failed to get storageclasses: exit status 1
I1121 12:46:28.960633    6411 dumplogs.go:126] kubectl --request-timeout 5s get persistentvolumes --all-namespaces -o yaml
I1121 12:46:28.960645    6411 local.go:42] ⚙️ kubectl --request-timeout 5s get persistentvolumes --all-namespaces -o yaml
W1121 12:46:29.033851    6411 dumplogs.go:132] Failed to get persistentvolumes: exit status 1
I1121 12:46:29.033947    6411 dumplogs.go:126] kubectl --request-timeout 5s get mutatingwebhookconfigurations --all-namespaces -o yaml
I1121 12:46:29.033958    6411 local.go:42] ⚙️ kubectl --request-timeout 5s get mutatingwebhookconfigurations --all-namespaces -o yaml
W1121 12:46:29.094841    6411 dumplogs.go:132] Failed to get mutatingwebhookconfigurations: exit status 1
I1121 12:46:29.094942    6411 dumplogs.go:126] kubectl --request-timeout 5s get validatingwebhookconfigurations --all-namespaces -o yaml
I1121 12:46:29.094953    6411 local.go:42] ⚙️ kubectl --request-timeout 5s get validatingwebhookconfigurations --all-namespaces -o yaml
W1121 12:46:29.167809    6411 dumplogs.go:132] Failed to get validatingwebhookconfigurations: exit status 1
I1121 12:46:29.167865    6411 local.go:42] ⚙️ kubectl --request-timeout 5s get namespaces --no-headers -o custom-columns=name:.metadata.name
W1121 12:46:29.239302    6411 down.go:34] Dumping cluster logs at the start of Down() failed: failed to get namespaces: exit status 1
I1121 12:46:29.239336    6411 down.go:48] /tmp/kops.BrBB3PkNZ delete cluster --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --yes
I1121 12:46:29.239353    6411 local.go:42] ⚙️ /tmp/kops.BrBB3PkNZ delete cluster --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --yes
I1121 12:46:29.257954    6531 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1121 12:46:29.258065    6531 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
No cloud resources to delete

error removing cluster from state store: refusing to delete: unknown file found: s3://k8s-kops-prow/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/cluster-completed.spec
Error: exit status 1
+ echo 'kubetest2 down failed'
kubetest2 down failed
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...