Result: FAILURE
Tests: 1 failed / 0 succeeded
Started: 2022-11-20 04:44
Elapsed: 2m49s
Revision: master

Test Failures


kubetest2 Down (43s)

exit status 1 (from junit_runner.xml)



Error lines from build-log.txt

... skipping 209 lines ...
I1120 04:45:18.716698    6245 local.go:42] ⚙️ kubectl cluster-info dump --all-namespaces -o yaml --output-directory /logs/artifacts/cluster-info
I1120 04:45:19.110142    6245 dumplogs.go:188] /tmp/kops.kpxolIdLt toolbox dump --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --private-key /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu -o yaml
I1120 04:45:19.110208    6245 local.go:42] ⚙️ /tmp/kops.kpxolIdLt toolbox dump --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --private-key /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu -o yaml
W1120 04:45:31.955999    6245 dumplogs.go:270] ControlPlane instance not found from kops toolbox dump
I1120 04:45:31.956237    6245 dumplogs.go:126] kubectl --request-timeout 5s get csinodes --all-namespaces -o yaml
I1120 04:45:31.956269    6245 local.go:42] ⚙️ kubectl --request-timeout 5s get csinodes --all-namespaces -o yaml
W1120 04:45:32.024063    6245 dumplogs.go:132] Failed to get csinodes: exit status 1
I1120 04:45:32.024465    6245 dumplogs.go:126] kubectl --request-timeout 5s get csidrivers --all-namespaces -o yaml
I1120 04:45:32.024559    6245 local.go:42] ⚙️ kubectl --request-timeout 5s get csidrivers --all-namespaces -o yaml
W1120 04:45:32.093750    6245 dumplogs.go:132] Failed to get csidrivers: exit status 1
I1120 04:45:32.093859    6245 dumplogs.go:126] kubectl --request-timeout 5s get storageclasses --all-namespaces -o yaml
I1120 04:45:32.093870    6245 local.go:42] ⚙️ kubectl --request-timeout 5s get storageclasses --all-namespaces -o yaml
W1120 04:45:32.156240    6245 dumplogs.go:132] Failed to get storageclasses: exit status 1
I1120 04:45:32.156346    6245 dumplogs.go:126] kubectl --request-timeout 5s get persistentvolumes --all-namespaces -o yaml
I1120 04:45:32.156364    6245 local.go:42] ⚙️ kubectl --request-timeout 5s get persistentvolumes --all-namespaces -o yaml
W1120 04:45:32.215765    6245 dumplogs.go:132] Failed to get persistentvolumes: exit status 1
I1120 04:45:32.215883    6245 dumplogs.go:126] kubectl --request-timeout 5s get mutatingwebhookconfigurations --all-namespaces -o yaml
I1120 04:45:32.215896    6245 local.go:42] ⚙️ kubectl --request-timeout 5s get mutatingwebhookconfigurations --all-namespaces -o yaml
W1120 04:45:32.273865    6245 dumplogs.go:132] Failed to get mutatingwebhookconfigurations: exit status 1
I1120 04:45:32.273996    6245 dumplogs.go:126] kubectl --request-timeout 5s get validatingwebhookconfigurations --all-namespaces -o yaml
I1120 04:45:32.274014    6245 local.go:42] ⚙️ kubectl --request-timeout 5s get validatingwebhookconfigurations --all-namespaces -o yaml
W1120 04:45:32.336319    6245 dumplogs.go:132] Failed to get validatingwebhookconfigurations: exit status 1
I1120 04:45:32.336375    6245 local.go:42] ⚙️ kubectl --request-timeout 5s get namespaces --no-headers -o custom-columns=name:.metadata.name
W1120 04:45:32.395281    6245 down.go:34] Dumping cluster logs at the start of Down() failed: failed to get namespaces: exit status 1
I1120 04:45:32.395314    6245 down.go:48] /tmp/kops.kpxolIdLt delete cluster --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --yes
I1120 04:45:32.395326    6245 local.go:42] ⚙️ /tmp/kops.kpxolIdLt delete cluster --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --yes
I1120 04:45:32.410655    6371 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1120 04:45:32.410751    6371 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
No cloud resources to delete

error removing cluster from state store: refusing to delete: unknown file found: s3://k8s-kops-prow/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/cluster-completed.spec
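
Note: the delete fails because kops refuses to remove a state-store prefix that still contains a file it does not recognize (the "unknown file found" above). A minimal sketch of inspecting and clearing that object with the AWS CLI, assuming credentials with write access to the k8s-kops-prow bucket (bucket and key are taken verbatim from the error above):

    # List everything left under the cluster's state-store prefix
    aws s3 ls --recursive s3://k8s-kops-prow/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/
    # Remove the object kops flags as unknown, then retry 'kops delete cluster'
    aws s3 rm s3://k8s-kops-prow/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/cluster-completed.spec
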
I1120 04:45:46.617606    6245 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/11/20 04:45:46 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I1120 04:45:46.632958    6245 http.go:37] curl https://ip.jsb.workers.dev
I1120 04:45:46.744455    6245 up.go:167] /tmp/kops.kpxolIdLt create cluster --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --cloud aws --kubernetes-version 1.21.0 --ssh-public-key /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --override=cluster.spec.nodeTerminationHandler.enabled=true --admin-access 34.67.219.232/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-west-2a --master-size c5.large
I1120 04:45:46.744500    6245 local.go:42] ⚙️ /tmp/kops.kpxolIdLt create cluster --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --cloud aws --kubernetes-version 1.21.0 --ssh-public-key /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --override=cluster.spec.nodeTerminationHandler.enabled=true --admin-access 34.67.219.232/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-west-2a --master-size c5.large
I1120 04:45:46.759190    6381 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1120 04:45:46.759380    6381 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1120 04:45:46.801433    6381 create_cluster.go:728] Using SSH public key: /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519.pub

cluster "e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io" already exists; use 'kops update cluster' to apply changes
Error: exit status 1
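
Note: 'kops create cluster' only registers a new cluster spec, so it exits when the state store already holds this cluster name. Following the suggestion in the message above, a sketch of applying changes to the already-registered cluster instead, using the same kops binary and cluster name as the command in this log (preview first, then apply):

    /tmp/kops.kpxolIdLt update cluster --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io
    /tmp/kops.kpxolIdLt update cluster --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --yes
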
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.kpxolIdLt --down
I1120 04:45:47.307702    6398 featureflag.go:160] FeatureFlag "SpecOverrideFlag"=true
I1120 04:45:47.310710    6398 app.go:61] The files in RunDir shall not be part of Artifacts
I1120 04:45:47.310749    6398 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I1120 04:45:47.310806    6398 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/db5e84d4-688d-11ed-bcbb-8a04251da9db"
... skipping 8 lines ...
I1120 04:46:02.354956    6398 local.go:42] ⚙️ kubectl cluster-info dump --all-namespaces -o yaml --output-directory /logs/artifacts/cluster-info
I1120 04:46:02.412838    6398 dumplogs.go:188] /tmp/kops.kpxolIdLt toolbox dump --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --private-key /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu -o yaml
I1120 04:46:02.412883    6398 local.go:42] ⚙️ /tmp/kops.kpxolIdLt toolbox dump --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --private-key /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu -o yaml
W1120 04:46:15.757636    6398 dumplogs.go:270] ControlPlane instance not found from kops toolbox dump
I1120 04:46:15.757745    6398 dumplogs.go:126] kubectl --request-timeout 5s get csinodes --all-namespaces -o yaml
I1120 04:46:15.757756    6398 local.go:42] ⚙️ kubectl --request-timeout 5s get csinodes --all-namespaces -o yaml
W1120 04:46:15.816248    6398 dumplogs.go:132] Failed to get csinodes: exit status 1
I1120 04:46:15.816346    6398 dumplogs.go:126] kubectl --request-timeout 5s get csidrivers --all-namespaces -o yaml
I1120 04:46:15.816356    6398 local.go:42] ⚙️ kubectl --request-timeout 5s get csidrivers --all-namespaces -o yaml
W1120 04:46:15.875280    6398 dumplogs.go:132] Failed to get csidrivers: exit status 1
I1120 04:46:15.875387    6398 dumplogs.go:126] kubectl --request-timeout 5s get storageclasses --all-namespaces -o yaml
I1120 04:46:15.875397    6398 local.go:42] ⚙️ kubectl --request-timeout 5s get storageclasses --all-namespaces -o yaml
W1120 04:46:15.935905    6398 dumplogs.go:132] Failed to get storageclasses: exit status 1
I1120 04:46:15.936005    6398 dumplogs.go:126] kubectl --request-timeout 5s get persistentvolumes --all-namespaces -o yaml
I1120 04:46:15.936015    6398 local.go:42] ⚙️ kubectl --request-timeout 5s get persistentvolumes --all-namespaces -o yaml
W1120 04:46:15.998191    6398 dumplogs.go:132] Failed to get persistentvolumes: exit status 1
I1120 04:46:15.998306    6398 dumplogs.go:126] kubectl --request-timeout 5s get mutatingwebhookconfigurations --all-namespaces -o yaml
I1120 04:46:15.998317    6398 local.go:42] ⚙️ kubectl --request-timeout 5s get mutatingwebhookconfigurations --all-namespaces -o yaml
W1120 04:46:16.059336    6398 dumplogs.go:132] Failed to get mutatingwebhookconfigurations: exit status 1
I1120 04:46:16.059707    6398 dumplogs.go:126] kubectl --request-timeout 5s get validatingwebhookconfigurations --all-namespaces -o yaml
I1120 04:46:16.059729    6398 local.go:42] ⚙️ kubectl --request-timeout 5s get validatingwebhookconfigurations --all-namespaces -o yaml
W1120 04:46:16.123302    6398 dumplogs.go:132] Failed to get validatingwebhookconfigurations: exit status 1
I1120 04:46:16.123348    6398 local.go:42] ⚙️ kubectl --request-timeout 5s get namespaces --no-headers -o custom-columns=name:.metadata.name
W1120 04:46:16.183708    6398 down.go:34] Dumping cluster logs at the start of Down() failed: failed to get namespaces: exit status 1
I1120 04:46:16.183741    6398 down.go:48] /tmp/kops.kpxolIdLt delete cluster --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --yes
I1120 04:46:16.183751    6398 local.go:42] ⚙️ /tmp/kops.kpxolIdLt delete cluster --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --yes
I1120 04:46:16.199683    6522 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1120 04:46:16.199789    6522 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
No cloud resources to delete

error removing cluster from state store: refusing to delete: unknown file found: s3://k8s-kops-prow/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/cluster-completed.spec
Error: exit status 1
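
Note: this second teardown fails on the same state-store object. Once that object has been cleared (see the sketch after the first delete attempt above), the down step can be retried with the same kubetest2 invocation recorded earlier in this log, for example:

    kubetest2 kops -v=2 --cloud-provider=aws \
      --cluster-name=e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io \
      --kops-root=/home/prow/go/src/k8s.io/kops \
      --kops-binary-path=/tmp/kops.kpxolIdLt --down
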
+ echo 'kubetest2 down failed'
kubetest2 down failed
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...