PR (andrewsykim): support configuration of kube-proxy IPVS tcp,tcpfin,udp timeout
Result: FAILURE
Tests: 0 failed / 87 succeeded
Started: 2019-12-17 09:23
Elapsed: 12m9s
Revision: 0ea25ec674978550813b505585b0c2174fcd9d35
Refs: 85517
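For context, the change under test adds IPVS session-timeout knobs to kube-proxy. A minimal sketch of the resulting configuration, assuming the `tcpTimeout`/`tcpFinTimeout`/`udpTimeout` fields this PR introduces on the v1alpha1 IPVS config; the values shown are illustrative, not defaults from the PR:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
ipvs:
  # Field names assumed from the kube-proxy v1alpha1 API extended by
  # this PR; a value of 0s would keep the system defaults.
  tcpTimeout: 900s
  tcpFinTimeout: 120s
  udpTimeout: 300s
```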

No Test Failures!



Error lines from build-log.txt

... skipping 56 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 155: bogus-expected-to-fail: command not found
!!! [1217 09:28:21] Call tree:
!!! [1217 09:28:21]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [1217 09:28:21]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [1217 09:28:21]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:131 juLog(...)
!!! [1217 09:28:21]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:159 record_command(...)
!!! [1217 09:28:21]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
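The failure above is deliberate: the canary runs a command that cannot exist to prove the harness records failures correctly. A rough sketch of the idea, with the function body assumed from the log rather than taken from the actual legacy-script.sh:

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of the canary check: invoke a nonexistent
# command and confirm a nonzero exit code surfaces to the caller.
record_command_canary() {
  bogus-expected-to-fail   # intentionally missing; shells report exit 127
}

record_command_canary 2>/dev/null
status=$?
if [ "$status" -ne 0 ]; then
  echo "canary reported failure as expected (exit ${status})"
fi
```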
+++ [1217 09:28:21] Running kubeadm tests
+++ [1217 09:28:29] Building go targets for linux/amd64:
    cmd/kubeadm
hack/make-rules/test.sh: line 191: KUBE_TEST_API: unbound variable
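The KUBE_TEST_API message comes from bash's nounset mode. A minimal sketch of the failure mode and the usual `${VAR:-}` guard; the surrounding test.sh logic is assumed, only the variable name is taken from the log:

```shell
#!/usr/bin/env bash
# Under `set -u`, expanding an unset variable aborts the script with
# "unbound variable"; the ${VAR:-default} form substitutes a default.
set -u
unset KUBE_TEST_API 2>/dev/null || true

# echo "$KUBE_TEST_API"        # would abort: "KUBE_TEST_API: unbound variable"
echo "api=${KUBE_TEST_API:-}"  # guarded expansion prints "api="
```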
+++ [1217 09:29:22] Running tests without code coverage
{"Time":"2019-12-17T09:31:07.832721684Z","Action":"output","Package":"k8s.io/kubernetes/cmd/kubeadm/test/cmd","Output":"ok  \tk8s.io/kubernetes/cmd/kubeadm/test/cmd\t59.948s\n"}
... skipping 303 lines ...
+++ [1217 09:33:06] Building kube-controller-manager
+++ [1217 09:33:12] Building go targets for linux/amd64:
    cmd/kube-controller-manager
+++ [1217 09:33:45] Starting controller-manager
Flag --port has been deprecated, see --secure-port instead.
I1217 09:33:46.771855   54583 serving.go:312] Generated self-signed cert in-memory
W1217 09:33:47.207867   54583 authentication.go:409] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W1217 09:33:47.207924   54583 authentication.go:267] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W1217 09:33:47.207935   54583 authentication.go:291] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W1217 09:33:47.207953   54583 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W1217 09:33:47.207976   54583 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I1217 09:33:47.208007   54583 controllermanager.go:161] Version: v1.18.0-alpha.0.1812+5ad586f84e16e5
I1217 09:33:47.209272   54583 secure_serving.go:178] Serving securely on [::]:10257
I1217 09:33:47.209374   54583 tlsconfig.go:219] Starting DynamicServingCertificateController
I1217 09:33:47.209793   54583 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I1217 09:33:47.209856   54583 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...
... skipping 18 lines ...
W1217 09:33:47.508857   54583 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1217 09:33:47.508871   54583 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1217 09:33:47.508882   54583 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1217 09:33:47.508893   54583 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1217 09:33:47.508939   54583 controllermanager.go:533] Started "disruption"
W1217 09:33:47.508950   54583 controllermanager.go:512] "tokencleaner" is disabled
E1217 09:33:47.509512   54583 core.go:90] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W1217 09:33:47.509539   54583 controllermanager.go:525] Skipping "service"
W1217 09:33:47.509802   54583 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1217 09:33:47.509825   54583 controllermanager.go:533] Started "csrcleaner"
W1217 09:33:47.509834   54583 controllermanager.go:525] Skipping "nodeipam"
I1217 09:33:47.510126   54583 node_lifecycle_controller.go:77] Sending events to api server
E1217 09:33:47.510166   54583 core.go:231] failed to start cloud node lifecycle controller: no cloud provider provided
W1217 09:33:47.510175   54583 controllermanager.go:525] Skipping "cloud-node-lifecycle"
W1217 09:33:47.510508   54583 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1217 09:33:47.510644   54583 controllermanager.go:533] Started "clusterrole-aggregation"
W1217 09:33:47.510658   54583 controllermanager.go:525] Skipping "ttl-after-finished"
I1217 09:33:47.511165   54583 controllermanager.go:533] Started "replicationcontroller"
I1217 09:33:47.511572   54583 controllermanager.go:533] Started "cronjob"
... skipping 129 lines ...
W1217 09:33:48.318892   54583 controllermanager.go:512] "bootstrapsigner" is disabled
I1217 09:33:48.319245   54583 namespace_controller.go:200] Starting namespace controller
I1217 09:33:48.319259   54583 shared_informer.go:197] Waiting for caches to sync for namespace
I1217 09:33:48.319296   54583 horizontal.go:168] Starting HPA controller
I1217 09:33:48.319302   54583 shared_informer.go:197] Waiting for caches to sync for HPA
I1217 09:33:48.333405   54583 shared_informer.go:204] Caches are synced for job 
W1217 09:33:48.345256   54583 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
I1217 09:33:48.387103   54583 shared_informer.go:204] Caches are synced for taint 
I1217 09:33:48.387198   54583 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
I1217 09:33:48.387281   54583 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I1217 09:33:48.387715   54583 taint_manager.go:186] Starting NoExecuteTaintManager
I1217 09:33:48.388373   54583 shared_informer.go:204] Caches are synced for PV protection 
I1217 09:33:48.388528   54583 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"86db6eb1-92be-433d-8bb7-5eb53333178f", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller
... skipping 3 lines ...
I1217 09:33:48.416640   54583 shared_informer.go:204] Caches are synced for TTL 
I1217 09:33:48.416655   54583 shared_informer.go:204] Caches are synced for service account 
I1217 09:33:48.417126   54583 shared_informer.go:204] Caches are synced for GC 
I1217 09:33:48.419413   54583 shared_informer.go:204] Caches are synced for namespace 
I1217 09:33:48.419495   54583 shared_informer.go:204] Caches are synced for HPA 
I1217 09:33:48.419589   51138 controller.go:606] quota admission added evaluator for: serviceaccounts
E1217 09:33:48.464609   54583 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
The Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.0.0.1": provided IP is already allocated
I1217 09:33:48.686022   54583 shared_informer.go:204] Caches are synced for daemon sets 
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   43s
Recording: run_kubectl_version_tests
Running command: run_kubectl_version_tests
... skipping 88 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [1217 09:33:53] Creating namespace namespace-1576575233-9409
namespace/namespace-1576575233-9409 created
Context "test" modified.
+++ [1217 09:33:53] Testing RESTMapper
+++ [1217 09:33:54] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
bindings                                                                      true         Binding
componentstatuses                 cs                                          false        ComponentStatus
configmaps                        cm                                          true         ConfigMap
endpoints                         ep                                          true         Endpoints
... skipping 601 lines ...
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: resource(s) were provided, but no name, label selector, or --all flag specified
core.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: setting 'all' parameter but found a non empty selector. 
core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:211: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
... skipping 12 lines ...
poddisruptionbudget.policy/test-pdb-2 created
core.sh:245: Successful get pdb/test-pdb-2 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 50%
poddisruptionbudget.policy/test-pdb-3 created
core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
poddisruptionbudget.policy/test-pdb-4 created
core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
error: min-available and max-unavailable cannot be both specified
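The pdb checks above exercise mutually exclusive fields. A sketch of a PodDisruptionBudget like the ones created here, with a selector and labels that are illustrative rather than taken from the job's testdata; only one of minAvailable/maxUnavailable may be set, which is what the error line verifies:

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: test-pdb-2
spec:
  minAvailable: 50%        # setting maxUnavailable as well would be rejected
  selector:
    matchLabels:
      app: example         # assumed label, not from this job's testdata
```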
core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
pod/env-test-pod created
matched TEST_CMD_1
matched <set to the key 'key-1' in secret 'test-secret'>
matched TEST_CMD_2
matched <set to the key 'key-2' of config map 'test-configmap'>
... skipping 188 lines ...
pod/valid-pod patched
core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
pod/valid-pod patched
core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
pod/valid-pod patched
core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
+++ [1217 09:34:49] "kubectl patch with resourceVersion 546" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
Successful
message:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
message:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
node/node-v1-test created
W1217 09:34:50.892017   54583 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
node/node-v1-test replaced
core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
node "node-v1-test" deleted
core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
Edit cancelled, no changes made.
... skipping 22 lines ...
spec:
  containers:
  - image: k8s.gcr.io/pause:2.0
    name: kubernetes-pause
has:localonlyvalue
core.sh:585: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
error: 'name' already has a value (valid-pod), and --overwrite is false
core.sh:589: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
core.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
pod/valid-pod labeled
core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
core.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 85 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [1217 09:35:05] Creating namespace namespace-1576575305-17381
namespace/namespace-1576575305-17381 created
Context "test" modified.
+++ [1217 09:35:05] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 41 lines ...

Usage:
  kubectl create -f FILENAME [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
+++ [1217 09:35:05] "kubectl create with empty string list" returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
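The validation error above is what a YAML null inside `args` produces. An illustrative manifest of that shape; this is an assumption about the triggering input, not the actual contents of hack/testdata/invalid-rc-with-empty-args.yaml:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: invalid-rc            # name and image are illustrative
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
        args:
        -                     # empty item parses as YAML null, hence "nil"
```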
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
+++ exit code: 0
Recording: run_kubectl_apply_tests
Running command: run_kubectl_apply_tests

... skipping 17 lines ...
pod "test-pod" deleted
customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
I1217 09:35:10.536971   51138 client.go:361] parsed scheme: "endpoint"
I1217 09:35:10.537026   51138 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1217 09:35:10.541775   51138 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
kind.mygroup.example.com/myobj serverside-applied (server dry run)
Error from server (NotFound): resources.mygroup.example.com "myobj" not found
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
+++ exit code: 0
Recording: run_kubectl_run_tests
Running command: run_kubectl_run_tests

+++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 102 lines ...
Context "test" modified.
+++ [1217 09:35:14] Testing kubectl create filter
create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_apply_deployments_tests
Running command: run_kubectl_apply_deployments_tests

... skipping 30 lines ...
I1217 09:35:19.266222   54583 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1576575315-21426", Name:"nginx-8484dd655", UID:"272382e4-408a-4d63-b761-3db7d141ba40", APIVersion:"apps/v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-7d9nf
I1217 09:35:19.272071   54583 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1576575315-21426", Name:"nginx-8484dd655", UID:"272382e4-408a-4d63-b761-3db7d141ba40", APIVersion:"apps/v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-scddq
I1217 09:35:19.272583   54583 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1576575315-21426", Name:"nginx-8484dd655", UID:"272382e4-408a-4d63-b761-3db7d141ba40", APIVersion:"apps/v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-cfh6d
I1217 09:35:19.378630   54583 horizontal.go:353] Horizontal Pod Autoscaler frontend has been deleted in namespace-1576575301-21905
apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
Successful
message:Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1576575315-21426\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx", Namespace: "namespace-1576575315-21426"
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
has:Error from server (Conflict)
deployment.apps/nginx configured
I1217 09:35:29.048712   54583 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1576575315-21426", Name:"nginx", UID:"8316ba26-e797-4b24-a86e-fa101ab6f8d3", APIVersion:"apps/v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-668b6c7744 to 3
I1217 09:35:29.051823   54583 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1576575315-21426", Name:"nginx-668b6c7744", UID:"37e81cee-21fd-466a-9425-dbcd16757089", APIVersion:"apps/v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-85plm
I1217 09:35:29.131064   54583 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1576575315-21426", Name:"nginx-668b6c7744", UID:"37e81cee-21fd-466a-9425-dbcd16757089", APIVersion:"apps/v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-xlt98
I1217 09:35:29.131594   54583 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1576575315-21426", Name:"nginx-668b6c7744", UID:"37e81cee-21fd-466a-9425-dbcd16757089", APIVersion:"apps/v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-tzq4h
Successful
... skipping 141 lines ...
+++ [1217 09:35:37] Creating namespace namespace-1576575337-24764
namespace/namespace-1576575337-24764 created
Context "test" modified.
+++ [1217 09:35:37] Testing kubectl get
get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:{
    "apiVersion": "v1",
    "items": [],
... skipping 23 lines ...
has not:No resources found
Successful
message:NAME
has not:No resources found
get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:error: the server doesn't have a resource type "foobar"
has not:No resources found
Successful
message:No resources found in namespace-1576575337-24764 namespace.
has:No resources found
Successful
message:
has not:No resources found
Successful
message:No resources found in namespace-1576575337-24764 namespace.
has:No resources found
get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
Successful
message:Error from server (NotFound): pods "abc" not found
has not:List
Successful
message:I1217 09:35:41.177530   65020 loader.go:375] Config loaded from file:  /tmp/tmp.wNgaaGSWFB/.kube/config
I1217 09:35:41.179077   65020 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I1217 09:35:41.221586   65020 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
I1217 09:35:41.223861   65020 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds
... skipping 479 lines ...
Successful
message:NAME    DATA   AGE
one     0      1s
three   0      0s
two     0      1s
STATUS    REASON          MESSAGE
Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has not:watch is only supported on individual resources
Successful
message:STATUS    REASON          MESSAGE
Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has not:watch is only supported on individual resources
+++ [1217 09:35:48] Creating namespace namespace-1576575348-3427
namespace/namespace-1576575348-3427 created
Context "test" modified.
get.sh:153: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/valid-pod created
... skipping 56 lines ...
}
get.sh:158: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
<no value>Successful
message:valid-pod:
has:valid-pod:
Successful
message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-12-17T09:35:48Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1576575348-3427", "resourceVersion":"783", "selfLink":"/api/v1/namespaces/namespace-1576575348-3427/pods/valid-pod", "uid":"86367594-10ef-4333-b387-b65fe9f31ca0"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
has:missing is not found
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
Successful
message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-12-17T09:35:48Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1576575348-3427","resourceVersion":"783","selfLink":"/api/v1/namespaces/namespace-1576575348-3427/pods/valid-pod","uid":"86367594-10ef-4333-b387-b65fe9f31ca0"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-12-17T09:35:48Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1576575348-3427 resourceVersion:783 selfLink:/api/v1/namespaces/namespace-1576575348-3427/pods/valid-pod uid:86367594-10ef-4333-b387-b65fe9f31ca0] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
has:map has no entry for key "missing"
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          2s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:STATUS
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          2s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:valid-pod
Successful
message:pod/valid-pod
status/<unknown>
has not:STATUS
Successful
... skipping 45 lines ...
      (Client.Timeout exceeded while reading body)'
    reason: UnexpectedServerResponse
  - message: 'unable to decode an event from the watch stream: net/http: request canceled
      (Client.Timeout exceeded while reading body)'
    reason: ClientWatchDecoding
kind: Status
message: 'an error on the server ("unable to decode an event from the watch stream:
  net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented
  the request from succeeding'
metadata: {}
reason: InternalError
status: Failure
has not:STATUS
... skipping 42 lines ...
      (Client.Timeout exceeded while reading body)'
    reason: UnexpectedServerResponse
  - message: 'unable to decode an event from the watch stream: net/http: request canceled
      (Client.Timeout exceeded while reading body)'
    reason: ClientWatchDecoding
kind: Status
message: 'an error on the server ("unable to decode an event from the watch stream:
  net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented
  the request from succeeding'
metadata: {}
reason: InternalError
status: Failure
has:name: valid-pod
Successful
message:Error from server (NotFound): pods "invalid-pod" not found
has:"invalid-pod" not found
pod "valid-pod" deleted
get.sh:196: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/redis-master created
pod/valid-pod created
Successful
... skipping 35 lines ...
+++ command: run_kubectl_exec_pod_tests
+++ [1217 09:35:55] Creating namespace namespace-1576575355-2905
namespace/namespace-1576575355-2905 created
Context "test" modified.
+++ [1217 09:35:56] Testing kubectl exec POD COMMAND
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
pod/test-pod created
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pods "test-pod" not found
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_exec_resource_name_tests
Running command: run_kubectl_exec_resource_name_tests

... skipping 2 lines ...
+++ command: run_kubectl_exec_resource_name_tests
+++ [1217 09:35:57] Creating namespace namespace-1576575357-7496
namespace/namespace-1576575357-7496 created
Context "test" modified.
+++ [1217 09:35:57] Testing kubectl exec TYPE/NAME COMMAND
Successful
message:error: the server doesn't have a resource type "foo"
has:error:
Successful
message:Error from server (NotFound): deployments.apps "bar" not found
has:"bar" not found
pod/test-pod created
replicaset.apps/frontend created
I1217 09:35:58.507147   54583 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1576575357-7496", Name:"frontend", UID:"b5de375c-d12e-46dd-96d1-972998a5f3eb", APIVersion:"apps/v1", ResourceVersion:"841", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-htxs8
I1217 09:35:58.511580   54583 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1576575357-7496", Name:"frontend", UID:"b5de375c-d12e-46dd-96d1-972998a5f3eb", APIVersion:"apps/v1", ResourceVersion:"841", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-gmqxg
I1217 09:35:58.512501   54583 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1576575357-7496", Name:"frontend", UID:"b5de375c-d12e-46dd-96d1-972998a5f3eb", APIVersion:"apps/v1", ResourceVersion:"841", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-576d2
configmap/test-set-env-config created
Successful
message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
has:not implemented
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:not found
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
Successful
message:Error from server (BadRequest): pod frontend-576d2 does not have a host assigned
has not:not found
Successful
message:Error from server (BadRequest): pod frontend-576d2 does not have a host assigned
has not:pod or type/name must be specified
{"component":"entrypoint","file":"prow/entrypoint/run.go:168","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","time":"2019-12-17T09:35:59Z"}
pod "test-pod" deleted