Result   | FAILURE
Tests    | 1 failed / 0 succeeded
Started  |
Elapsed  | 2m50s
Revision | master
exit status 1
from junit_runner.xml
... skipping 209 lines ...
I1121 04:45:20.206378 6264 local.go:42] ⚙️ kubectl cluster-info dump --all-namespaces -o yaml --output-directory /logs/artifacts/cluster-info
I1121 04:45:20.267937 6264 dumplogs.go:188] /tmp/kops.cDu3r9wH1 toolbox dump --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --private-key /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu -o yaml
I1121 04:45:20.267981 6264 local.go:42] ⚙️ /tmp/kops.cDu3r9wH1 toolbox dump --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --private-key /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu -o yaml
W1121 04:45:33.882249 6264 dumplogs.go:270] ControlPlane instance not found from kops toolbox dump
I1121 04:45:33.882425 6264 dumplogs.go:126] kubectl --request-timeout 5s get csinodes --all-namespaces -o yaml
I1121 04:45:33.882444 6264 local.go:42] ⚙️ kubectl --request-timeout 5s get csinodes --all-namespaces -o yaml
W1121 04:45:33.945802 6264 dumplogs.go:132] Failed to get csinodes: exit status 1
I1121 04:45:33.945918 6264 dumplogs.go:126] kubectl --request-timeout 5s get csidrivers --all-namespaces -o yaml
I1121 04:45:33.945932 6264 local.go:42] ⚙️ kubectl --request-timeout 5s get csidrivers --all-namespaces -o yaml
W1121 04:45:34.006235 6264 dumplogs.go:132] Failed to get csidrivers: exit status 1
I1121 04:45:34.006349 6264 dumplogs.go:126] kubectl --request-timeout 5s get storageclasses --all-namespaces -o yaml
I1121 04:45:34.006360 6264 local.go:42] ⚙️ kubectl --request-timeout 5s get storageclasses --all-namespaces -o yaml
W1121 04:45:34.065110 6264 dumplogs.go:132] Failed to get storageclasses: exit status 1
I1121 04:45:34.065245 6264 dumplogs.go:126] kubectl --request-timeout 5s get persistentvolumes --all-namespaces -o yaml
I1121 04:45:34.065256 6264 local.go:42] ⚙️ kubectl --request-timeout 5s get persistentvolumes --all-namespaces -o yaml
W1121 04:45:34.129358 6264 dumplogs.go:132] Failed to get persistentvolumes: exit status 1
I1121 04:45:34.129499 6264 dumplogs.go:126] kubectl --request-timeout 5s get mutatingwebhookconfigurations --all-namespaces -o yaml
I1121 04:45:34.129511 6264 local.go:42] ⚙️ kubectl --request-timeout 5s get mutatingwebhookconfigurations --all-namespaces -o yaml
W1121 04:45:34.189555 6264 dumplogs.go:132] Failed to get mutatingwebhookconfigurations: exit status 1
I1121 04:45:34.189653 6264 dumplogs.go:126] kubectl --request-timeout 5s get validatingwebhookconfigurations --all-namespaces -o yaml
I1121 04:45:34.189663 6264 local.go:42] ⚙️ kubectl --request-timeout 5s get validatingwebhookconfigurations --all-namespaces -o yaml
W1121 04:45:34.245810 6264 dumplogs.go:132] Failed to get validatingwebhookconfigurations: exit status 1
I1121 04:45:34.245856 6264 local.go:42] ⚙️ kubectl --request-timeout 5s get namespaces --no-headers -o custom-columns=name:.metadata.name
W1121 04:45:34.302237 6264 down.go:34] Dumping cluster logs at the start of Down() failed: failed to get namespaces: exit status 1
I1121 04:45:34.302273 6264 down.go:48] /tmp/kops.cDu3r9wH1 delete cluster --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --yes
I1121 04:45:34.302289 6264 local.go:42] ⚙️ /tmp/kops.cDu3r9wH1 delete cluster --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --yes
I1121 04:45:34.318171 6384 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1121 04:45:34.318260 6384 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
No cloud resources to delete
error removing cluster from state store: refusing to delete: unknown file found: s3://k8s-kops-prow/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/cluster-completed.spec
I1121 04:45:48.264260 6264 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2022/11/21 04:45:48 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I1121 04:45:48.278397 6264 http.go:37] curl https://ip.jsb.workers.dev
I1121 04:45:48.455538 6264 up.go:167] /tmp/kops.cDu3r9wH1 create cluster --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --cloud aws --kubernetes-version 1.21.0 --ssh-public-key /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --override=cluster.spec.nodeTerminationHandler.enabled=true --admin-access 35.224.73.140/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ca-central-1a --master-size c5.large
I1121 04:45:48.455596 6264 local.go:42] ⚙️ /tmp/kops.cDu3r9wH1 create cluster --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --cloud aws --kubernetes-version 1.21.0 --ssh-public-key /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519.pub --override cluster.spec.nodePortAccess=0.0.0.0/0 --override=cluster.spec.nodeTerminationHandler.enabled=true --admin-access 35.224.73.140/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ca-central-1a --master-size c5.large
I1121 04:45:48.470886 6394 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1121 04:45:48.470961 6394 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1121 04:45:48.515413 6394 create_cluster.go:728] Using SSH public key: /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519.pub
cluster "e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io" already exists; use 'kops update cluster' to apply changes
Error: exit status 1
+ kops-finish
+ kubetest2 kops -v=2 --cloud-provider=aws --cluster-name=e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --kops-root=/home/prow/go/src/k8s.io/kops --admin-access= --env=KOPS_FEATURE_FLAGS=SpecOverrideFlag --kops-binary-path=/tmp/kops.cDu3r9wH1 --down
I1121 04:45:49.074350 6413 featureflag.go:160] FeatureFlag "SpecOverrideFlag"=true
I1121 04:45:49.076015 6413 app.go:61] The files in RunDir shall not be part of Artifacts
I1121 04:45:49.076041 6413 app.go:62] pass rundir-in-artifacts flag True for RunDir to be part of Artifacts
I1121 04:45:49.076069 6413 app.go:64] RunDir for this run: "/home/prow/go/src/k8s.io/kops/_rundir/064b9dd4-6957-11ed-bcbb-8a04251da9db"
... skipping 8 lines ...
I1121 04:46:04.341471 6413 local.go:42] ⚙️ kubectl cluster-info dump --all-namespaces -o yaml --output-directory /logs/artifacts/cluster-info
I1121 04:46:04.398482 6413 dumplogs.go:188] /tmp/kops.cDu3r9wH1 toolbox dump --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --private-key /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu -o yaml
I1121 04:46:04.398521 6413 local.go:42] ⚙️ /tmp/kops.cDu3r9wH1 toolbox dump --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --private-key /tmp/kops/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/id_ed25519 --ssh-user ubuntu -o yaml
W1121 04:46:17.737850 6413 dumplogs.go:270] ControlPlane instance not found from kops toolbox dump
I1121 04:46:17.737966 6413 dumplogs.go:126] kubectl --request-timeout 5s get csinodes --all-namespaces -o yaml
I1121 04:46:17.737977 6413 local.go:42] ⚙️ kubectl --request-timeout 5s get csinodes --all-namespaces -o yaml
W1121 04:46:17.796761 6413 dumplogs.go:132] Failed to get csinodes: exit status 1
I1121 04:46:17.796871 6413 dumplogs.go:126] kubectl --request-timeout 5s get csidrivers --all-namespaces -o yaml
I1121 04:46:17.796882 6413 local.go:42] ⚙️ kubectl --request-timeout 5s get csidrivers --all-namespaces -o yaml
W1121 04:46:17.856933 6413 dumplogs.go:132] Failed to get csidrivers: exit status 1
I1121 04:46:17.857055 6413 dumplogs.go:126] kubectl --request-timeout 5s get storageclasses --all-namespaces -o yaml
I1121 04:46:17.857068 6413 local.go:42] ⚙️ kubectl --request-timeout 5s get storageclasses --all-namespaces -o yaml
W1121 04:46:17.916599 6413 dumplogs.go:132] Failed to get storageclasses: exit status 1
I1121 04:46:17.916693 6413 dumplogs.go:126] kubectl --request-timeout 5s get persistentvolumes --all-namespaces -o yaml
I1121 04:46:17.916703 6413 local.go:42] ⚙️ kubectl --request-timeout 5s get persistentvolumes --all-namespaces -o yaml
W1121 04:46:17.972465 6413 dumplogs.go:132] Failed to get persistentvolumes: exit status 1
I1121 04:46:17.972568 6413 dumplogs.go:126] kubectl --request-timeout 5s get mutatingwebhookconfigurations --all-namespaces -o yaml
I1121 04:46:17.972579 6413 local.go:42] ⚙️ kubectl --request-timeout 5s get mutatingwebhookconfigurations --all-namespaces -o yaml
W1121 04:46:18.029579 6413 dumplogs.go:132] Failed to get mutatingwebhookconfigurations: exit status 1
I1121 04:46:18.029684 6413 dumplogs.go:126] kubectl --request-timeout 5s get validatingwebhookconfigurations --all-namespaces -o yaml
I1121 04:46:18.029913 6413 local.go:42] ⚙️ kubectl --request-timeout 5s get validatingwebhookconfigurations --all-namespaces -o yaml
W1121 04:46:18.091553 6413 dumplogs.go:132] Failed to get validatingwebhookconfigurations: exit status 1
I1121 04:46:18.091593 6413 local.go:42] ⚙️ kubectl --request-timeout 5s get namespaces --no-headers -o custom-columns=name:.metadata.name
W1121 04:46:18.150050 6413 down.go:34] Dumping cluster logs at the start of Down() failed: failed to get namespaces: exit status 1
I1121 04:46:18.150078 6413 down.go:48] /tmp/kops.cDu3r9wH1 delete cluster --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --yes
I1121 04:46:18.150089 6413 local.go:42] ⚙️ /tmp/kops.cDu3r9wH1 delete cluster --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --yes
I1121 04:46:18.167636 6537 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1121 04:46:18.167823 6537 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
No cloud resources to delete
error removing cluster from state store: refusing to delete: unknown file found: s3://k8s-kops-prow/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/cluster-completed.spec
Error: exit status 1
+ echo 'kubetest2 down failed'
kubetest2 down failed
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...
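
Both teardown attempts above fail at the same point: kops finds no cloud resources, but refuses to remove the cluster from its S3 state store because the prefix contains a file it does not recognize (cluster-completed.spec), so the --down step exits non-zero. A minimal manual-cleanup sketch, not something this job ran, would be to delete the unrecognized object and retry the delete; the state store location is assumed from the s3:// path in the error message:

  # Assumed cleanup step: remove the object kops refuses to delete.
  aws s3 rm s3://k8s-kops-prow/e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io/cluster-completed.spec
  # Retry the delete the job attempted, pointing kops at the same state store.
  kops delete cluster --name e2e-c0d41e2af2-13250.test-cncf-aws.k8s.io --state s3://k8s-kops-prow --yes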