PR249043822: Fix: Job didn't delete pods within activeDeadlineSeconds
Result: FAILURE
Tests: 0 failed / 80 succeeded
Started: 2022-04-07 14:16
Elapsed: 1h2m
Revision: fbdf6e779704b45d0eec4fc1b7820cae4de05ac7
Refs: 97101
job-version: v1.24.0-beta.0.111+d7b28a72024f06
kubetest-version:
revision: v1.24.0-beta.0.111+d7b28a72024f06

No Test Failures!



Error lines from build-log.txt

... skipping 717 lines ...
Looking for address 'e2e-97101-95a39-master-ip'
Looking for address 'e2e-97101-95a39-master-internal-ip'
Using master: e2e-97101-95a39-master (external IP: 35.196.243.182; internal IP: 10.40.0.2)
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

Kubernetes cluster created.
Cluster "k8s-infra-e2e-boskos-scale-01_e2e-97101-95a39" set.
User "k8s-infra-e2e-boskos-scale-01_e2e-97101-95a39" set.
Context "k8s-infra-e2e-boskos-scale-01_e2e-97101-95a39" created.
Switched to context "k8s-infra-e2e-boskos-scale-01_e2e-97101-95a39".
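
The four kubeconfig messages above are standard kubectl output. A minimal sketch of the commands that emit them (the --server flag is an assumption inferred from the external IP above, not copied from this log):

  kubectl config set-cluster k8s-infra-e2e-boskos-scale-01_e2e-97101-95a39 --server=https://35.196.243.182
  kubectl config set-credentials k8s-infra-e2e-boskos-scale-01_e2e-97101-95a39
  kubectl config set-context k8s-infra-e2e-boskos-scale-01_e2e-97101-95a39 \
      --cluster=k8s-infra-e2e-boskos-scale-01_e2e-97101-95a39 \
      --user=k8s-infra-e2e-boskos-scale-01_e2e-97101-95a39
  kubectl config use-context k8s-infra-e2e-boskos-scale-01_e2e-97101-95a39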
... skipping 228 lines ...
e2e-97101-95a39-minion-group-x4jk   Ready                         <none>   57s   v1.24.0-beta.0.111+d7b28a72024f06
e2e-97101-95a39-minion-group-xhz5   Ready                         <none>   59s   v1.24.0-beta.0.111+d7b28a72024f06
e2e-97101-95a39-minion-heapster     Ready                         <none>   75s   v1.24.0-beta.0.111+d7b28a72024f06
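
The node list above (header skipped in the elided lines) is plain kubectl output and can be reproduced against this cluster's context with:

  kubectl get nodes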
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-0               Healthy   {"health":"true","reason":""}   
etcd-1               Healthy   {"health":"true","reason":""}   
controller-manager   Healthy   ok                              
scheduler            Healthy   ok                              
Cluster validation encountered some problems, but cluster should be in working order
...ignoring non-fatal errors in validate-cluster
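
The health table above comes from the deprecated v1 ComponentStatus API (hence the repeated warning); the validate step's check is equivalent to:

  kubectl get componentstatuses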
Done, listing cluster services:

Kubernetes control plane is running at https://35.196.243.182
GLBCDefaultBackend is running at https://35.196.243.182/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
CoreDNS is running at https://35.196.243.182/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://35.196.243.182/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
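
The service listing above is the standard output of:

  kubectl cluster-info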
... skipping 3015 lines ...
Specify --start=60703 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/cl2-**: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
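
The scp errors above are non-fatal: the log-dump step copies a fixed list of log globs off the master, and any glob that matches nothing makes the underlying scp exit 1. A hedged sketch of the failing invocation (zone and destination directory are assumptions, not taken from this log):

  gcloud compute scp --zone=us-east1-b \
      'e2e-97101-95a39-master:/var/log/cluster-autoscaler.log*' \
      'e2e-97101-95a39-master:/var/log/startupscript.log*' \
      /workspace/_artifacts/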
Dumping logs from nodes to GCS directly at 'gs://sig-scalability-logs/pull-kubernetes-e2e-gce-100-performance/1512071366814208000' using logexporter
namespace/logexporter created
secret/google-service-account created
daemonset.apps/logexporter created
Listing marker files (gs://sig-scalability-logs/pull-kubernetes-e2e-gce-100-performance/1512071366814208000/logexported-nodes-registry) for successful nodes...
CommandException: One or more URLs matched no objects.
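
The CommandException is gsutil's standard message when a URL matches no objects; the marker listing amounts to something like (exact flags are an assumption):

  gsutil ls gs://sig-scalability-logs/pull-kubernetes-e2e-gce-100-performance/1512071366814208000/logexported-nodes-registry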
... skipping 411 lines ...
Property "users.k8s-infra-e2e-boskos-scale-01_e2e-97101-95a39-basic-auth" unset.
Property "contexts.k8s-infra-e2e-boskos-scale-01_e2e-97101-95a39" unset.
Cleared config for k8s-infra-e2e-boskos-scale-01_e2e-97101-95a39 from /workspace/.kube/config
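
The 'Property ... unset.' lines above are the output of kubectl config unset; the cleanup is equivalent to:

  kubectl config unset users.k8s-infra-e2e-boskos-scale-01_e2e-97101-95a39-basic-auth
  kubectl config unset contexts.k8s-infra-e2e-boskos-scale-01_e2e-97101-95a39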
Done
2022/04/07 15:16:18 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 7m31.42276706s
2022/04/07 15:16:18 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2022/04/07 15:18:48 main.go:326: [Boskos] Fail To Release: 1 error occurred:
	* Post "http://boskos.test-pods.svc.cluster.local./release?dest=dirty&name=k8s-infra-e2e-boskos-scale-01&owner=pull-kubernetes-e2e-gce-100-performance": dial tcp 10.35.241.148:80: connect: connection refused

, kubetest err: <nil>
2022/04/07 15:18:48 main.go:778: [Boskos] Update of k8s-infra-e2e-boskos-scale-01 failed with no resource name k8s-infra-e2e-boskos-scale-01
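
The Boskos release failed because the boskos service was unreachable at teardown (connection refused), so the leased project was not marked dirty. A manual retry would repeat the same POST, e.g. with curl from inside the test-pods namespace (sketch only; the URL is copied verbatim from the error above):

  curl -X POST 'http://boskos.test-pods.svc.cluster.local./release?dest=dirty&name=k8s-infra-e2e-boskos-scale-01&owner=pull-kubernetes-e2e-gce-100-performance'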
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 723, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 569, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 16 lines ...