PR | rata: hack/local-up-cluster.sh: Remove old dynamic certs |
Result | FAILURE |
Tests | 0 failed / 0 succeeded |
Started | |
Elapsed | 43m21s |
Revision | |
Builder | 79a5f48d-c26e-11ed-b381-5a96e7d210c2 |
Refs | master:fcf5d23e 116385:f1a51265 |
infra-commit | 5a24afd26 |
repo | k8s.io/kubernetes |
repo-commit | cf4ab47506b66ae3033701b0123597eb1fc2d6e0 |
repos | {u'k8s.io/kubernetes': u'master:fcf5d23e6818ccfbc92f332af82f31661f8d4f88,116385:f1a512657f2d3ca53dd9629d7977c12f561fa3b9'} |
... skipping 250 lines ...
I0314 14:03:47.681] make: Entering directory '/go/src/k8s.io/kubernetes'
I0314 14:03:47.708] +++ [0314 14:03:47] WARNING: linux/arm will no longer be built/shipped by default, please build it explicitly if needed.
I0314 14:03:47.712] +++ [0314 14:03:47] support for linux/arm will be removed in a subsequent release.
W0314 14:03:47.812] 2023/03/14 14:03:47 process.go:155: Step 'sh -c docker ps -aq | xargs docker rm -fv' finished in 6.777988385s
W0314 14:03:52.869] 2023/03/14 14:03:47 process.go:153: Running: pkill -f cloud-controller-manager
W0314 14:03:52.869] 2023/03/14 14:03:47 process.go:155: Step 'pkill -f cloud-controller-manager' finished in 17.505601ms
W0314 14:03:52.869] 2023/03/14 14:03:47 local.go:189: unable to kill kubernetes process "cloud-controller-manager": error during pkill -f cloud-controller-manager: exit status 1
W0314 14:03:52.869] 2023/03/14 14:03:47 process.go:153: Running: pkill -f kube-controller-manager
W0314 14:03:52.870] 2023/03/14 14:03:47 process.go:155: Step 'pkill -f kube-controller-manager' finished in 4.303492ms
W0314 14:03:52.870] 2023/03/14 14:03:47 local.go:189: unable to kill kubernetes process "kube-controller-manager": error during pkill -f kube-controller-manager: exit status 1
W0314 14:03:52.870] 2023/03/14 14:03:47 process.go:153: Running: pkill -f kube-proxy
W0314 14:03:52.870] 2023/03/14 14:03:47 process.go:155: Step 'pkill -f kube-proxy' finished in 3.556667ms
W0314 14:03:52.870] 2023/03/14 14:03:47 local.go:189: unable to kill kubernetes process "kube-proxy": error during pkill -f kube-proxy: exit status 1
W0314 14:03:52.870] 2023/03/14 14:03:47 process.go:153: Running: pkill -f kube-scheduler
W0314 14:03:52.870] 2023/03/14 14:03:47 process.go:155: Step 'pkill -f kube-scheduler' finished in 3.3858ms
W0314 14:03:52.870] 2023/03/14 14:03:47 local.go:189: unable to kill kubernetes process "kube-scheduler": error during pkill -f kube-scheduler: exit status 1
W0314 14:03:52.870] 2023/03/14 14:03:47 process.go:153: Running: pkill -f kube-apiserver
W0314 14:03:52.870] 2023/03/14 14:03:47 process.go:155: Step 'pkill -f kube-apiserver' finished in 3.38407ms
W0314 14:03:52.870] 2023/03/14 14:03:47 local.go:189: unable to kill kubernetes process "kube-apiserver": error during pkill -f kube-apiserver: exit status 1
W0314 14:03:52.870] 2023/03/14 14:03:47 process.go:153: Running: pkill -f kubelet
W0314 14:03:52.871] 2023/03/14 14:03:47 process.go:155: Step 'pkill -f kubelet' finished in 3.601903ms
W0314 14:03:52.871] 2023/03/14 14:03:47 local.go:189: unable to kill kubernetes process "kubelet": error during pkill -f kubelet: exit status 1
W0314 14:03:52.871] 2023/03/14 14:03:47 process.go:153: Running: pkill etcd
W0314 14:03:52.871] 2023/03/14 14:03:47 process.go:155: Step 'pkill etcd' finished in 3.581213ms
W0314 14:03:52.871] 2023/03/14 14:03:47 local.go:193: unable to kill etcd: error during pkill etcd: exit status 1
W0314 14:03:52.871] 2023/03/14 14:03:47 local.go:107: using 172.17.0.1 for API_HOST_IP, HOSTNAME_OVERRIDE, KUBELET_HOST
W0314 14:03:52.871] 2023/03/14 14:03:47 process.go:153: Running: /go/src/k8s.io/kubernetes/hack/local-up-cluster.sh
W0314 14:03:52.871] go version go1.20.2 linux/amd64
I0314 14:03:54.855] +++ [0314 14:03:54] Building go targets for linux/amd64
I0314 14:03:54.908] k8s.io/kubernetes/cmd/kubectl (static)
I0314 14:03:54.922] k8s.io/kubernetes/cmd/kube-apiserver (static)
... skipping 260 lines ...
I0314 14:16:58.872] clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
I0314 14:16:58.922] configmap/coredns created
I0314 14:16:58.976] deployment.apps/coredns created
I0314 14:16:59.029] service/kube-dns created
I0314 14:16:59.035] coredns addon successfully deployed.
I0314 14:16:59.040] Checking CNI Installation at /opt/cni/bin
I0314 14:16:59.044] WARNING : The kubelet is configured to not fail even if swap is enabled; production deployments should disable swap unless testing NodeSwap feature.
W0314 14:16:59.145] 2023/03/14 14:16:59 [INFO] generate received request
W0314 14:16:59.354] 2023/03/14 14:16:59 [INFO] received CSR
W0314 14:16:59.354] 2023/03/14 14:16:59 [INFO] generating key: rsa-2048
W0314 14:16:59.354] 2023/03/14 14:16:59 [INFO] encoded CSR
W0314 14:16:59.360] 2023/03/14 14:16:59 [INFO] signed certificate with serial number 242793357573888658565613193804209343756195656363
W0314 14:16:59.360] 2023/03/14 14:16:59 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
... skipping 108 lines ...
I0314 14:23:06.700] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x71f6f30, 0xc003837650}, {0xc005072067?, 0xc0010ce940?}, {0x71caf80?, 0xc000d4ad08?}, {0xc003447e38, 0x1, 0x1})
I0314 14:23:06.700] vendor/k8s.io/client-go/tools/watch/until.go:113
I0314 14:23:06.701] > k8s.io/kubernetes/test/e2e/apps.testRSLifeCycle({0x7facf82c2730?, 0xc005322b80}, 0xc0002e3590)
I0314 14:23:06.701] test/e2e/apps/replica_set.go:533
I0314 14:23:06.701] | ctxUntil, cancel := context.WithTimeout(ctx, f.Timeouts.PodStart)
I0314 14:23:06.701] | defer cancel()
I0314 14:23:06.701] > _, err = watchtools.Until(ctxUntil, rsList.ResourceVersion, w, func(event watch.Event) (bool, error) {
I0314 14:23:06.701] | if rset, ok := event.Object.(*appsv1.ReplicaSet); ok {
I0314 14:23:06.701] | found := rset.ObjectMeta.Name == rsName &&
I0314 14:23:06.701] > k8s.io/kubernetes/test/e2e/apps.glob..func9.6({0x7facf82c2730?, 0xc005322b80?})
I0314 14:23:06.701] test/e2e/apps/replica_set.go:155
I0314 14:23:06.701] | */
I0314 14:23:06.701] | framework.ConformanceIt("Replace and Patch tests", func(ctx context.Context) {
... skipping 4 lines ...
I0314 14:23:06.702] vendor/github.com/onsi/ginkgo/v2/internal/node.go:452
I0314 14:23:06.702] k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
I0314 14:23:06.702] vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854
I0314 14:23:06.702] k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
I0314 14:23:06.702] vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841
I0314 14:23:06.702] ------------------------------
I0314 14:23:06.702] • [FAILED] [309.235 seconds]
I0314 14:23:06.702] [sig-apps] ReplicaSet [It] Replace and Patch tests [Conformance]
I0314 14:23:06.702] test/e2e/apps/replica_set.go:154
I0314 14:23:06.702]
I0314 14:23:06.702] Timeline >>
I0314 14:23:06.702] STEP: Creating a kubernetes client @ 03/14/23 14:17:57.463
I0314 14:23:06.703] Mar 14 14:17:57.463: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 57 lines ...
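For context, the failing step in the stack traces above and below is a client-go watch wait: the test lists the ReplicaSets to obtain a resourceVersion, then blocks in watchtools.Until with a context timeout until test-rs reports the requested replica count, and it is that wait which ends in "timed out waiting for the condition". The sketch below is a minimal, self-contained illustration of that pattern, not the e2e helper itself; the package and function names and the ReadyReplicas condition are assumptions made for the example, and wiring up a real clientset is left out.

// Sketch only: illustrates the watch-and-wait pattern from the stack trace.
// Package and helper names are hypothetical, not the e2e test code.
package rswait

import (
	"context"
	"fmt"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	watchtools "k8s.io/client-go/tools/watch"
)

// waitForReplicaSetScale blocks until the named ReplicaSet reports want
// ready replicas or the timeout expires. Checking ReadyReplicas is an
// assumption about what "scaled" means for this illustration.
func waitForReplicaSetScale(ctx context.Context, c kubernetes.Interface, ns, name string, want int32, timeout time.Duration) error {
	ctxUntil, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()

	// List once to obtain a resourceVersion to start the watch from.
	rsList, err := c.AppsV1().ReplicaSets(ns).List(ctxUntil, metav1.ListOptions{})
	if err != nil {
		return err
	}

	// Watcher used by watchtools.Until to (re)open watches on ReplicaSets.
	w := &cache.ListWatch{
		WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) {
			return c.AppsV1().ReplicaSets(ns).Watch(ctxUntil, options)
		},
	}

	// Block until the condition returns true, the watch fails, or ctxUntil expires.
	_, err = watchtools.Until(ctxUntil, rsList.ResourceVersion, w, func(event watch.Event) (bool, error) {
		rs, ok := event.Object.(*appsv1.ReplicaSet)
		if !ok {
			return false, nil
		}
		return rs.Name == name && rs.Status.ReadyReplicas == want, nil
	})
	if err != nil {
		return fmt.Errorf("replicaset %s/%s did not reach %d ready replicas: %w", ns, name, want, err)
	}
	return nil
}

Under this pattern, the failure surfaces exactly as logged: the watch keeps delivering ReplicaSet updates, but none satisfies the condition before the context deadline, so Until returns the "timed out waiting for the condition" error seen in the summary that follows.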
I0314 14:23:06.707] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x71f6f30, 0xc003837650}, {0xc005072067?, 0xc0010ce940?}, {0x71caf80?, 0xc000d4ad08?}, {0xc003447e38, 0x1, 0x1})
I0314 14:23:06.707] vendor/k8s.io/client-go/tools/watch/until.go:113
I0314 14:23:06.707] > k8s.io/kubernetes/test/e2e/apps.testRSLifeCycle({0x7facf82c2730?, 0xc005322b80}, 0xc0002e3590)
I0314 14:23:06.707] test/e2e/apps/replica_set.go:533
I0314 14:23:06.707] | ctxUntil, cancel := context.WithTimeout(ctx, f.Timeouts.PodStart)
I0314 14:23:06.707] | defer cancel()
I0314 14:23:06.707] > _, err = watchtools.Until(ctxUntil, rsList.ResourceVersion, w, func(event watch.Event) (bool, error) {
I0314 14:23:06.707] | if rset, ok := event.Object.(*appsv1.ReplicaSet); ok {
I0314 14:23:06.707] | found := rset.ObjectMeta.Name == rsName &&
I0314 14:23:06.707] > k8s.io/kubernetes/test/e2e/apps.glob..func9.6({0x7facf82c2730?, 0xc005322b80?})
I0314 14:23:06.708] test/e2e/apps/replica_set.go:155
I0314 14:23:06.708] | */
I0314 14:23:06.708] | framework.ConformanceIt("Replace and Patch tests", func(ctx context.Context) {
... skipping 3 lines ...
I0314 14:23:06.708] k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x71ff2c0?, 0xc005322b80})
I0314 14:23:06.708] vendor/github.com/onsi/ginkgo/v2/internal/node.go:452
I0314 14:23:06.708] k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
I0314 14:23:06.708] vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854
I0314 14:23:06.708] k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
I0314 14:23:06.708] vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841
I0314 14:23:06.708] Mar 14 14:23:06.558: INFO: Unexpected error: failed to see replicas of test-rs in namespace replicaset-7985 scale to requested amount of 3:
I0314 14:23:06.708] <*errors.errorString | 0xc0001ebbe0>: {
I0314 14:23:06.708] s: "timed out waiting for the condition",
I0314 14:23:06.709] }
I0314 14:23:06.709] [FAILED] in [It] - test/e2e/apps/replica_set.go:551 @ 03/14/23 14:23:06.559
I0314 14:23:06.709] Mar 14 14:23:06.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0314 14:23:06.709] STEP: dump namespace information after failure @ 03/14/23 14:23:06.565
I0314 14:23:06.709] STEP: Collecting events from namespace "replicaset-7985". @ 03/14/23 14:23:06.565
I0314 14:23:06.709] STEP: Found 32 events. @ 03/14/23 14:23:06.57
I0314 14:23:06.709] Mar 14 14:23:06.570: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for test-rs-gcp7z: { } Scheduled: Successfully assigned replicaset-7985/test-rs-gcp7z to 172.17.0.1
I0314 14:23:06.709] Mar 14 14:23:06.570: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for test-rs-q24k5: { } Scheduled: Successfully assigned replicaset-7985/test-rs-q24k5 to 172.17.0.1
I0314 14:23:06.709] Mar 14 14:23:06.570: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for test-rs-tchkw: { } Scheduled: Successfully assigned replicaset-7985/test-rs-tchkw to 172.17.0.1
I0314 14:23:06.709] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:17:57 +0000 UTC - event for test-rs: {replicaset-controller } SuccessfulCreate: Created pod: test-rs-gcp7z
I0314 14:23:06.709] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:17:59 +0000 UTC - event for test-rs-gcp7z: {kubelet 172.17.0.1} Pulling: Pulling image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4"
I0314 14:23:06.709] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:18:02 +0000 UTC - event for test-rs-gcp7z: {kubelet 172.17.0.1} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" in 3.010831542s (3.010841535s including waiting)
I0314 14:23:06.709] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:18:02 +0000 UTC - event for test-rs-gcp7z: {kubelet 172.17.0.1} Failed: Error: failed to get sandbox container task: no running task found: task 06eff78a0f87a2e75b2f366cdf41e2e83782cb51de78215459b25f534e41106b not found: not found
I0314 14:23:06.710] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:18:03 +0000 UTC - event for test-rs-gcp7z: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
I0314 14:23:06.710] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:18:05 +0000 UTC - event for test-rs-gcp7z: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine
I0314 14:23:06.710] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:18:05 +0000 UTC - event for test-rs-gcp7z: {kubelet 172.17.0.1} Created: Created container httpd
I0314 14:23:06.710] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:18:05 +0000 UTC - event for test-rs-gcp7z: {kubelet 172.17.0.1} Started: Started container httpd
I0314 14:23:06.710] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:18:06 +0000 UTC - event for test-rs: {replicaset-controller } SuccessfulCreate: Created pod: test-rs-q24k5
I0314 14:23:06.710] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:18:06 +0000 UTC - event for test-rs: {replicaset-controller } SuccessfulCreate: Created pod: test-rs-tchkw
I0314 14:23:06.710] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:18:09 +0000 UTC - event for test-rs-q24k5: {kubelet 172.17.0.1} Started: Started container httpd
I0314 14:23:06.710] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:18:09 +0000 UTC - event for test-rs-q24k5: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
I0314 14:23:06.710] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:18:09 +0000 UTC - event for test-rs-q24k5: {kubelet 172.17.0.1} Created: Created container httpd
I0314 14:23:06.710] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:18:09 +0000 UTC - event for test-rs-q24k5: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine
I0314 14:23:06.710] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:18:09 +0000 UTC - event for test-rs-tchkw: {kubelet 172.17.0.1} Pulling: Pulling image "registry.k8s.io/pause:3.9"
I0314 14:23:06.710] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:18:09 +0000 UTC - event for test-rs-tchkw: {kubelet 172.17.0.1} Pulled: Successfully pulled image "registry.k8s.io/pause:3.9" in 446.268519ms (446.281148ms including waiting)
I0314 14:23:06.710] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:18:09 +0000 UTC - event for test-rs-tchkw: {kubelet 172.17.0.1} Failed: Error: failed to get sandbox container task: no running task found: task 696fe54ffe9c77e3c258702ba3ee8d2eeda52176ceb6ee2efc51340e290be2e2 not found: not found
I0314 14:23:06.711] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:18:09 +0000 UTC - event for test-rs-tchkw: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine
I0314 14:23:06.712] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:18:09 +0000 UTC - event for test-rs-tchkw: {kubelet 172.17.0.1} Failed: Error: failed to get sandbox container task: no running task found: task 696fe54ffe9c77e3c258702ba3ee8d2eeda52176ceb6ee2efc51340e290be2e2 not found: not found
I0314 14:23:06.712] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:18:10 +0000 UTC - event for test-rs-tchkw: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
I0314 14:23:06.712] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:18:12 +0000 UTC - event for test-rs-tchkw: {kubelet 172.17.0.1} Started: Started container test-rs
I0314 14:23:06.712] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:18:12 +0000 UTC - event for test-rs-tchkw: {kubelet 172.17.0.1} Created: Created container test-rs
I0314 14:23:06.712] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:18:12 +0000 UTC - event for test-rs-tchkw: {kubelet 172.17.0.1} Created: Created container httpd
I0314 14:23:06.712] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:18:12 +0000 UTC - event for test-rs-tchkw: {kubelet 172.17.0.1} Started: Started container httpd
I0314 14:23:06.712] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:18:12 +0000 UTC - event for test-rs-tchkw: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine
I0314 14:23:06.712] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:18:13 +0000 UTC - event for test-rs-gcp7z: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container httpd in pod test-rs-gcp7z_replicaset-7985(c45d5d56-e6b4-40d9-ad96-adec8e54555a)
I0314 14:23:06.712] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:18:15 +0000 UTC - event for test-rs-q24k5: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container httpd in pod test-rs-q24k5_replicaset-7985(0ae890ec-78a9-4032-a148-11ac04b6063e)
I0314 14:23:06.712] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:18:20 +0000 UTC - event for test-rs-tchkw: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container test-rs in pod test-rs-tchkw_replicaset-7985(88f98aac-c7cf-4622-a794-b411350e9552)
I0314 14:23:06.713] Mar 14 14:23:06.570: INFO: At 2023-03-14 14:18:20 +0000 UTC - event for test-rs-tchkw: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container httpd in pod test-rs-tchkw_replicaset-7985(88f98aac-c7cf-4622-a794-b411350e9552)
I0314 14:23:06.713] Mar 14 14:23:06.574: INFO: POD NODE PHASE GRACE CONDITIONS
I0314 14:23:06.713] Mar 14 14:23:06.574: INFO: test-rs-gcp7z 172.17.0.1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:17:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:20:55 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:20:55 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:17:57 +0000 UTC }]
I0314 14:23:06.713] Mar 14 14:23:06.574: INFO: test-rs-q24k5 172.17.0.1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:18:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:20:57 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:20:57 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:18:06 +0000 UTC }]
I0314 14:23:06.713] Mar 14 14:23:06.574: INFO: test-rs-tchkw 172.17.0.1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:18:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:20:59 +0000 UTC ContainersNotReady containers with unready status: [test-rs httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:20:59 +0000 UTC ContainersNotReady containers with unready status: [test-rs httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:18:06 +0000 UTC }]
I0314 14:23:06.713] Mar 14 14:23:06.574: INFO:
I0314 14:23:06.713] Mar 14 14:23:06.632: INFO:
... skipping 14 lines ...
I0314 14:23:06.765] Mar 14 14:23:06.649: INFO: Container test-rs ready: false, restart count 5
I0314 14:23:06.765] Mar 14 14:23:06.689: INFO:
I0314 14:23:06.765] Latency metrics for node 172.17.0.1
I0314 14:23:06.765] STEP: Destroying namespace "replicaset-7985" for this suite. @ 03/14/23 14:23:06.689
I0314 14:23:06.765] << Timeline
I0314 14:23:06.765]
I0314 14:23:06.765] [FAILED] failed to see replicas of test-rs in namespace replicaset-7985 scale to requested amount of 3: timed out waiting for the condition
I0314 14:23:06.766] In [It] at: test/e2e/apps/replica_set.go:551 @ 03/14/23 14:23:06.559
I0314 14:23:06.766] ------------------------------
I0314 14:28:14.128] •SSSSSSSSSSSSS•SSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0314 14:28:14.129] ------------------------------
I0314 14:28:34.130] Automatically polling progress:
I0314 14:28:34.130] [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] (Spec Runtime: 5m0.032s)
... skipping 2 lines ...
I0314 14:28:34.130] test/e2e/kubectl/kubectl.go:393
I0314 14:28:34.131] At [By Step] validating guestbook app (Step Runtime: 4m58.266s)
I0314 14:28:34.131] test/e2e/kubectl/kubectl.go:403
I0314 14:28:34.131]
I0314 14:28:34.131] Begin Captured GinkgoWriter Output >>
I0314 14:28:34.131] ...
I0314 14:28:34.131] Mar 14 14:28:06.630: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: k8s
I0314 14:28:34.131]
I0314 14:28:34.131] v1StatusW
I0314 14:28:34.131]
E0314 14:28:34.133] unexpected error
Traceback (most recent call last):
  File "/workspace/./test-infra/jenkins/bootstrap.py", line 1108, in bootstrap
    call(job_script(job, args.scenario, args.extra_job_args))
  File "/workspace/./test-infra/jenkins/bootstrap.py", line 1056, in <lambda>
    call = lambda *a, **kw: _call(end, *a, **kw)
  File "/workspace/./test-infra/jenkins/bootstrap.py", line 134, in _call
... skipping 8 lines ...
I0314 14:28:37.437] process 478063 exited with code 0 after 0.0m
I0314 14:28:37.438] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0314 14:28:37.438] Upload result and artifacts...
I0314 14:28:37.438] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/116385/pull-kubernetes-local-e2e/1635638266835243008
I0314 14:28:37.439] Call: gsutil ls gs://kubernetes-jenkins/pr-logs/pull/116385/pull-kubernetes-local-e2e/1635638266835243008/artifacts
W0314 14:28:39.813] CommandException: One or more URLs matched no objects.
E0314 14:28:40.362] Command failed
I0314 14:28:40.363] process 478123 exited with code 1 after 0.0m
W0314 14:28:40.363] Remote dir gs://kubernetes-jenkins/pr-logs/pull/116385/pull-kubernetes-local-e2e/1635638266835243008/artifacts not exist yet
I0314 14:28:40.363] Call: gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/116385/pull-kubernetes-local-e2e/1635638266835243008/artifacts
I0314 14:28:43.352] process 479572 exited with code 0 after 0.0m
W0314 14:28:43.353] metadata path /workspace/_artifacts/metadata.json does not exist
W0314 14:28:43.353] metadata not found or invalid, init with empty metadata
... skipping 23 lines ...