PR (enj): auth/exec: prevent multiple calls when expirationTimestamp is invalid
Result: ABORTED
Tests: 0 failed / 134 succeeded
Started: 2022-05-10 19:34
Elapsed: 17m28s
Revision: f428356f368bc8b8ced31601eb1a361c3ff994cc
Refs: 106768

No Test Failures!



Error lines from build-log.txt

... skipping 75 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 162: bogus-expected-to-fail: command not found
!!! [0510 19:40:11] Call tree:
!!! [0510 19:40:11]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [0510 19:40:11]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0510 19:40:11]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:138 juLog(...)
!!! [0510 19:40:11]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:166 record_command(...)
!!! [0510 19:40:11]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
+++ [0510 19:40:11] Running kubeadm tests
+++ [0510 19:40:12] Building go targets for linux/amd64
    k8s.io/kubernetes/hack/make-rules/helpers/go2make (non-static)
+++ [0510 19:40:16] Building go targets for linux/amd64
    k8s.io/kubernetes/cmd/kubeadm (static)
+++ [0510 19:41:04] Building go targets for linux/amd64
... skipping 214 lines ...
    k8s.io/kubernetes/hack/make-rules/helpers/go2make (non-static)
+++ [0510 19:44:19] Building go targets for linux/amd64
    k8s.io/kubernetes/cmd/kube-controller-manager (static)
+++ [0510 19:44:49] Generate kubeconfig for controller-manager
+++ [0510 19:44:49] Starting controller-manager
I0510 19:44:50.469553   56684 serving.go:348] Generated self-signed cert in-memory
W0510 19:44:51.134675   56684 authentication.go:423] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0510 19:44:51.134816   56684 authentication.go:317] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0510 19:44:51.134835   56684 authentication.go:341] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0510 19:44:51.134870   56684 authorization.go:225] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0510 19:44:51.134892   56684 authorization.go:193] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0510 19:44:51.134940   56684 controllermanager.go:180] Version: v1.25.0-alpha.0.400+8466cb89bb9493
I0510 19:44:51.134970   56684 controllermanager.go:182] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0510 19:44:51.137185   56684 secure_serving.go:210] Serving securely on [::]:10257
I0510 19:44:51.137207   56684 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0510 19:44:51.137485   56684 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
... skipping 53 lines ...
I0510 19:44:51.278939   56684 shared_informer.go:255] Waiting for caches to sync for attach detach
I0510 19:44:51.278963   56684 endpointslice_controller.go:257] Starting endpoint slice controller
I0510 19:44:51.278979   56684 shared_informer.go:255] Waiting for caches to sync for endpoint_slice
I0510 19:44:51.279008   56684 node_lifecycle_controller.go:77] Sending events to api server
I0510 19:44:51.279010   56684 replica_set.go:205] Starting replicaset controller
I0510 19:44:51.279021   56684 shared_informer.go:255] Waiting for caches to sync for ReplicaSet
E0510 19:44:51.279040   56684 core.go:211] failed to start cloud node lifecycle controller: no cloud provider provided
W0510 19:44:51.279055   56684 controllermanager.go:571] Skipping "cloud-node-lifecycle"
I0510 19:44:51.279384   56684 controllermanager.go:593] Started "ttl-after-finished"
I0510 19:44:51.279555   56684 ttlafterfinished_controller.go:109] Starting TTL after finished controller
I0510 19:44:51.279582   56684 shared_informer.go:255] Waiting for caches to sync for TTL after finished
W0510 19:44:51.279689   56684 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0510 19:44:51.279726   56684 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
... skipping 93 lines ...
I0510 19:44:51.299222   56684 controllermanager.go:593] Started "resourcequota"
I0510 19:44:51.299866   56684 controllermanager.go:593] Started "garbagecollector"
I0510 19:44:51.300198   56684 controllermanager.go:593] Started "disruption"
I0510 19:44:51.300504   56684 controllermanager.go:593] Started "statefulset"
I0510 19:44:51.300811   56684 controllermanager.go:593] Started "cronjob"
I0510 19:44:51.301098   56684 controllermanager.go:593] Started "deployment"
E0510 19:44:51.301415   56684 core.go:91] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0510 19:44:51.301443   56684 controllermanager.go:571] Skipping "service"
I0510 19:44:51.304582   56684 resource_quota_controller.go:273] Starting resource quota controller
I0510 19:44:51.304604   56684 shared_informer.go:255] Waiting for caches to sync for resource quota
I0510 19:44:51.304844   56684 garbagecollector.go:149] Starting garbage collector controller
I0510 19:44:51.304865   56684 shared_informer.go:255] Waiting for caches to sync for garbage collector
I0510 19:44:51.305106   56684 disruption.go:363] Starting disruption controller
... skipping 56 lines ...
I0510 19:44:51.692598   56684 shared_informer.go:262] Caches are synced for certificate-csrapproving
I0510 19:44:51.705125   56684 shared_informer.go:262] Caches are synced for resource quota
I0510 19:44:52.136919   56684 shared_informer.go:262] Caches are synced for garbage collector
I0510 19:44:52.205631   56684 shared_informer.go:262] Caches are synced for garbage collector
I0510 19:44:52.205655   56684 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
node/127.0.0.1 created
W0510 19:44:52.314311   56684 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
+++ [0510 19:44:52] Checking kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25+", GitVersion:"v1.25.0-alpha.0.400+8466cb89bb9493", GitCommit:"8466cb89bb949308f7f39abdc0644d4ef0ef3d4d", GitTreeState:"clean", BuildDate:"2022-05-10T17:03:43Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"25+", GitVersion:"v1.25.0-alpha.0.400+8466cb89bb9493", GitCommit:"8466cb89bb949308f7f39abdc0644d4ef0ef3d4d", GitTreeState:"clean", BuildDate:"2022-05-10T17:03:43Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
The Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.0.0.1"}: failed to allocate IP 10.0.0.1: provided IP is already allocated
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   40s
Recording: run_kubectl_version_tests
Running command: run_kubectl_version_tests

+++ Running case: test-cmd.run_kubectl_version_tests 
... skipping 196 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0510 19:44:57] Creating namespace namespace-1652211897-8024
namespace/namespace-1652211897-8024 created
Context "test" modified.
+++ [0510 19:44:57] Testing RESTMapper
+++ [0510 19:44:58] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
bindings                                       v1                                     true         Binding
componentstatuses                 cs           v1                                     false        ComponentStatus
configmaps                        cm           v1                                     true         ConfigMap
endpoints                         ep           v1                                     true         Endpoints
... skipping 61 lines ...
namespace/namespace-1652211904-17529 created
Context "test" modified.
+++ [0510 19:45:04] Testing clusterroles
rbac.sh:29: Successful get clusterroles/cluster-admin {{.metadata.name}}: cluster-admin
rbac.sh:30: Successful get clusterrolebindings/cluster-admin {{.metadata.name}}: cluster-admin
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created (dry run)
clusterrole.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created
rbac.sh:42: Successful get clusterrole/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
... skipping 18 lines ...
clusterrole.rbac.authorization.k8s.io/url-reader created
rbac.sh:61: Successful get clusterrole/url-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:
rbac.sh:62: Successful get clusterrole/url-reader {{range.rules}}{{range.nonResourceURLs}}{{.}}:{{end}}{{end}}: /logs/*:/healthz/*:
clusterrole.rbac.authorization.k8s.io/aggregation-reader created
rbac.sh:64: Successful get clusterrole/aggregation-reader {{.metadata.name}}: aggregation-reader
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created
rbac.sh:77: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (server dry run)
rbac.sh:80: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
... skipping 64 lines ...
rbac.sh:102: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:foo:test-all-user:
rbac.sh:103: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:foo:test-all-user:
rbac.sh:104: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:test-all-user:
rolebinding.rbac.authorization.k8s.io/admin created (dry run)
rolebinding.rbac.authorization.k8s.io/admin created (server dry run)
Successful
message:Error from server (NotFound): rolebindings.rbac.authorization.k8s.io "admin" not found
has: not found
rolebinding.rbac.authorization.k8s.io/admin created
rbac.sh:113: Successful get rolebinding/admin {{.roleRef.kind}}: ClusterRole
rbac.sh:114: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:
rolebinding.rbac.authorization.k8s.io/admin subjects updated
rbac.sh:116: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:foo:
... skipping 152 lines ...
namespace/namespace-1652211912-15623 created
Context "test" modified.
+++ [0510 19:45:12] Testing role
role.rbac.authorization.k8s.io/pod-admin created (dry run)
role.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): roles.rbac.authorization.k8s.io "pod-admin" not found
has: not found
role.rbac.authorization.k8s.io/pod-admin created
rbac.sh:159: Successful get role/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
rbac.sh:160: Successful get role/pod-admin {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:161: Successful get role/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
Successful
... skipping 439 lines ...
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
has:valid-pod
core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: resource(s) were provided, but no name was specified
core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: setting 'all' parameter but found a non empty selector.
core.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:210: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:214: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:219: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
... skipping 30 lines ...
I0510 19:45:25.317245   61473 round_trippers.go:553] GET https://127.0.0.1:6443/apis/policy/v1/namespaces/test-kubectl-describe-pod/poddisruptionbudgets/test-pdb-2 200 OK in 1 milliseconds
I0510 19:45:25.318941   61473 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-kubectl-describe-pod/events?fieldSelector=involvedObject.name%3Dtest-pdb-2%2CinvolvedObject.namespace%3Dtest-kubectl-describe-pod%2CinvolvedObject.kind%3DPodDisruptionBudget%2CinvolvedObject.uid%3Deac5d458-1ff2-406a-943a-5037afbb3959&limit=500 200 OK in 1 milliseconds
poddisruptionbudget.policy/test-pdb-3 created
core.sh:271: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
poddisruptionbudget.policy/test-pdb-4 created
core.sh:275: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
error: min-available and max-unavailable cannot be both specified
core.sh:281: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
pod/env-test-pod created
matched TEST_CMD_1
matched <set to the key 'key-1' in secret 'test-secret'>
matched TEST_CMD_2
matched <set to the key 'key-2' of config map 'test-configmap'>
... skipping 232 lines ...