PR | smarterclayton: kubelet: Force deleted pods can fail to move out of terminating
Result | FAILURE
Tests | 1 failed / 9 succeeded
Started |
Elapsed | 22m28s
Revision | f8ed653518811ade986c1543ff480358db3a6f44
Refs | 113145
job-version | v1.27.0-alpha.2.462+43e45f646bb8b2
kubetest-version | v20230222-b5208facd4
revision | v1.27.0-alpha.2.462+43e45f646bb8b2
error during ./hack/e2e-internal/e2e-up.sh: exit status 1
from junit_runner.xml
kubetest Build
kubetest Deferred TearDown
kubetest DumpClusterLogs (--up failed)
kubetest Extract
kubetest GetDeployer
kubetest Prepare
kubetest Stage
kubetest TearDown Previous
kubetest Timeout
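Each row above is a kubetest phase result; this run died during cluster bring-up, so DumpClusterLogs ran in its "--up failed" mode and TearDown was deferred. A minimal sketch of the corresponding lifecycle, assuming the classic kubetest CLI on the GCE provider; the project name and dump path are placeholders, not values from this job:

    # Sketch only: the same phase sequence, run locally. --extract covers
    # the Extract/Stage phases, --up runs ./hack/e2e-internal/e2e-up.sh,
    # --test runs the suite, --down tears the cluster back down.
    # The project name is a placeholder, not taken from this job.
    kubetest --provider=gce \
      --gcp-project=my-gcp-project \
      --extract=ci/latest \
      --dump=/tmp/cluster-logs \
      --up --test --down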
... skipping 67 lines ...
HOSTNAME=51f9a92f-b91e-11ed-ba53-ea13d4c51788
IMAGE=gcr.io/k8s-staging-test-infra/kubekins-e2e:v20230222-b5208facd4-master
INSTANCE_PREFIX=e2e-113145-95a39
JENKINS_GCE_SSH_PRIVATE_KEY_FILE=/workspace/.ssh/google_compute_engine
JENKINS_GCE_SSH_PUBLIC_KEY_FILE=/workspace/.ssh/google_compute_engine.pub
JOB_NAME=pull-kubernetes-e2e-gce-100-performance
JOB_SPEC={"type":"presubmit","job":"pull-kubernetes-e2e-gce-100-performance","buildid":"1631343366891376640","prowjobid":"51f9a92f-b91e-11ed-ba53-ea13d4c51788","refs":{"org":"kubernetes","repo":"kubernetes","repo_link":"https://github.com/kubernetes/kubernetes","base_ref":"master","base_sha":"efe20f6c9b54fbe36b4ff4c47d7d0bc857699b5e","base_link":"https://github.com/kubernetes/kubernetes/commit/efe20f6c9b54fbe36b4ff4c47d7d0bc857699b5e","pulls":[{"number":113145,"author":"smarterclayton","sha":"f8ed653518811ade986c1543ff480358db3a6f44","title":"kubelet: Force deleted pods can fail to move out of terminating","link":"https://github.com/kubernetes/kubernetes/pull/113145","commit_link":"https://github.com/kubernetes/kubernetes/pull/113145/commits/f8ed653518811ade986c1543ff480358db3a6f44","author_link":"https://github.com/smarterclayton"}],"path_alias":"k8s.io/kubernetes"},"extra_refs":[{"org":"kubernetes","repo":"perf-tests","base_ref":"master","path_alias":"k8s.io/perf-tests"},{"org":"kubernetes","repo":"release","base_ref":"master","path_alias":"k8s.io/release"}],"decoration_config":{"timeout":"2h0m0s","grace_period":"15m0s","utility_images":{"clonerefs":"gcr.io/k8s-prow/clonerefs:v20230301-6893d98ee8","initupload":"gcr.io/k8s-prow/initupload:v20230301-6893d98ee8","entrypoint":"gcr.io/k8s-prow/entrypoint:v20230301-6893d98ee8","sidecar":"gcr.io/k8s-prow/sidecar:v20230301-6893d98ee8"},"resources":{"clonerefs":{"requests":{"cpu":"100m"}},"initupload":{"requests":{"cpu":"100m"}},"place_entrypoint":{"requests":{"cpu":"100m"}},"sidecar":{"requests":{"cpu":"100m"}}},"gcs_configuration":{"bucket":"kubernetes-jenkins","path_strategy":"legacy","default_org":"kubernetes","default_repo":"kubernetes"},"gcs_credentials_secret":"service-account"}}
JOB_TYPE=presubmit
KUBECTL_PRUNE_WHITELIST_OVERRIDE=core/v1/ConfigMap core/v1/Endpoints core/v1/Namespace core/v1/PersistentVolumeClaim core/v1/PersistentVolume core/v1/ReplicationController core/v1/Secret core/v1/Service batch/v1/Job batch/v1/CronJob apps/v1/DaemonSet apps/v1/Deployment apps/v1/ReplicaSet apps/v1/StatefulSet networking.k8s.io/v1/Ingress
KUBELET_TEST_ARGS=--enable-debugging-handlers --kube-api-qps=100 --kube-api-burst=100
KUBEMARK_APISERVER_TEST_ARGS=--max-requests-inflight=80 --max-mutating-requests-inflight=0 --profiling --contention-profiling
KUBEPROXY_TEST_ARGS=--profiling --metrics-bind-address=0.0.0.0 --feature-gates=MinimizeIPTablesRestore=true
KUBERNETES_PORT=tcp://10.35.240.1:443
... skipping 471 lines ...
Network Project: k8s-infra-e2e-boskos-scale-28
Zone: us-east1-b
Dumping logs temporarily to '/tmp/tmp.7Nrcj8fbUx/logs'. Will upload to 'gs://sig-scalability-logs/pull-kubernetes-e2e-gce-100-performance/1631343366891376640' later.
Dumping logs from master locally to '/tmp/tmp.7Nrcj8fbUx/logs'
Trying to find master named 'e2e-113145-95a39-master'
Looking for address 'e2e-113145-95a39-master-ip'
ERROR: (gcloud.compute.addresses.describe) Could not fetch resource:
 - The resource 'projects/k8s-infra-e2e-boskos-scale-28/regions/us-east1/addresses/e2e-113145-95a39-master-ip' was not found
Could not detect Kubernetes master node.
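The decisive error is right above: the reserved address 'e2e-113145-95a39-master-ip' does not exist in the project, i.e. bring-up failed before the master was created, and everything below is log-dump fallout. A minimal sketch of the lookups the dump script is effectively making, using the resource names from this log (runnable only with access to the boskos project):

    # Does the master's reserved IP exist? This is the exact lookup that
    # failed above with "Could not fetch resource".
    gcloud compute addresses describe e2e-113145-95a39-master-ip \
      --project=k8s-infra-e2e-boskos-scale-28 --region=us-east1

    # Was the master VM itself ever created?
    gcloud compute instances describe e2e-113145-95a39-master \
      --project=k8s-infra-e2e-boskos-scale-28 --zone=us-east1-b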
Make sure you've launched a cluster with 'kube-up.sh'
Master not detected. Is the cluster up?
Dumping logs from nodes to GCS directly at 'gs://sig-scalability-logs/pull-kubernetes-e2e-gce-100-performance/1631343366891376640' using logexporter
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Failed to create logexporter daemonset.. falling back to logdump through SSH
E0302 17:40:47.461893 77804 memcache.go:238] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0302 17:40:47.462490 77804 memcache.go:238] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0302 17:40:47.464090 77804 memcache.go:238] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0302 17:40:47.465867 77804 memcache.go:238] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0302 17:40:47.467459 77804 memcache.go:238] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Dumping logs for nodes provided as args to dump_nodes() function
Changing logfiles to be world-readable for download
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-infra-e2e-boskos-scale-28/zones/us-east1-b/instances/e2e-113145-95a39-minion-heapster' was not found
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-infra-e2e-boskos-scale-28/zones/us-east1-b/instances/e2e-113145-95a39-minion-heapster' was not found
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-infra-e2e-boskos-scale-28/zones/us-east1-b/instances/e2e-113145-95a39-minion-heapster' was not found
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-infra-e2e-boskos-scale-28/zones/us-east1-b/instances/e2e-113145-95a39-minion-heapster' was not found
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-infra-e2e-boskos-scale-28/zones/us-east1-b/instances/e2e-113145-95a39-minion-heapster' was not found
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-infra-e2e-boskos-scale-28/zones/us-east1-b/instances/e2e-113145-95a39-minion-heapster' was not found
Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov cl2-* startupscript.log kern.log docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from e2e-113145-95a39-minion-heapster
ERROR: (gcloud.compute.instances.get-serial-port-output) Could not fetch serial port output: The resource 'projects/k8s-infra-e2e-boskos-scale-28/zones/us-east1-b/instances/e2e-113145-95a39-minion-heapster' was not found
ERROR: (gcloud.compute.scp) Could not fetch resource:
 - The resource 'projects/k8s-infra-e2e-boskos-scale-28/zones/us-east1-b/instances/e2e-113145-95a39-minion-heapster' was not found
Detecting nodes in the cluster
WARNING: The following filter keys were not present in any resource : name, zone
WARNING: The following filter keys were not present in any resource : name, zone
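The repeated "localhost:8080 was refused" messages are a secondary symptom, not an apiserver outage: bring-up failed before a kubeconfig was written (the teardown below confirms "Config not found: /workspace/.kube/config"), so kubectl falls back to its insecure default endpoint. A quick check, sketched with a hypothetical kubeconfig path:

    # kubectl only dials localhost:8080 when no server is configured;
    # verify whether a kubeconfig was ever written for this cluster.
    ls -l /workspace/.kube/config || echo "no kubeconfig: bring-up never got that far"

    # With an explicit kubeconfig (hypothetical path) the same call would
    # target the real apiserver instead of the insecure default.
    kubectl --kubeconfig=/path/to/kubeconfig get nodes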
INSTANCE_GROUPS=
... skipping 61 lines ...
W0302 17:43:13.517818 79478 loader.go:222] Config not found: /workspace/.kube/config
Property "contexts.k8s-infra-e2e-boskos-scale-28_e2e-113145-95a39" unset.
Cleared config for k8s-infra-e2e-boskos-scale-28_e2e-113145-95a39 from /workspace/.kube/config
Done
2023/03/02 17:43:13 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 38.18825838s
2023/03/02 17:43:13 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2023/03/02 17:43:13 main.go:328: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 1
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 723, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 569, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 16 lines ...
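The traceback is only the kubernetes_e2e.py wrapper propagating the non-zero exit from e2e-up.sh; the authoritative failure record is the junit artifact named just above. A minimal sketch for pulling the failure entries out of it, assuming standard junit XML attributes:

    # List the <failure> elements in the runner's junit output; their
    # attributes carry the failing step and error message.
    grep -o '<failure[^>]*>' /logs/artifacts/junit_runner.xml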