PR       | klueska: Update kubeletplugin API for DRA to v1alpha2
Result   | FAILURE
Tests    | 21 failed / 21 succeeded
Started  |
Elapsed  | 19m11s
Revision | fcd0d91b755a569a09d3f3396269e3c3dc80b9f0
Refs     | 116558
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sDRA\s\[Feature\:DynamicResourceAllocation\]\scluster\swith\sdelayed\sallocation\ssupports\sexternal\sclaim\sreferenced\sby\smultiple\scontainers\sof\smultiple\spods$'
[FAILED] Timed out after 60.001s.
claims in the namespaces
Expected
    <[]v1alpha2.ResourceClaim | len:1, cap:1>:
    - metadata:
        creationTimestamp: "2023-03-14T14:01:19Z"
        deletionGracePeriodSeconds: 0
        deletionTimestamp: "2023-03-14T14:01:27Z"
        finalizers:
        - dra-654.k8s.io/deletion-protection
        managedFields:
        - apiVersion: resource.k8s.io/v1alpha2
          fieldsType: FieldsV1
          fieldsV1:
            f:metadata:
              f:finalizers:
                .: {}
                v:"dra-654.k8s.io/deletion-protection": {}
            f:spec:
              f:allocationMode: {}
              f:parametersRef:
                .: {}
                f:kind: {}
                f:name: {}
              f:resourceClassName: {}
          manager: e2e.test
          operation: Update
          time: "2023-03-14T14:01:19Z"
        - apiVersion: resource.k8s.io/v1alpha2
          fieldsType: FieldsV1
          fieldsV1:
            f:status:
              f:allocation:
                .: {}
                f:availableOnNodes: {}
                f:context: {}
                f:shareable: {}
              f:driverName: {}
          manager: e2e.test
          operation: Update
          subresource: status
          time: "2023-03-14T14:01:19Z"
        name: external-claim
        namespace: dra-654
        resourceVersion: "3204"
        uid: beddf209-626f-441a-9ac9-99efffaa2754
      spec:
        allocationMode: WaitForFirstConsumer
        parametersRef:
          kind: ConfigMap
          name: parameters-1
        resourceClassName: dra-654-class
      status:
        allocation:
          availableOnNodes:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - kind-worker
                - kind-worker2
          context:
          - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}'
          shareable: true
        driverName: dra-654.k8s.io
to be empty
In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:02:27.822
(from junit_01.xml)
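The log below shows the test driver's resource controller repeatedly failing with "status.allocation: Forbidden: can only remove while marking a deallocation as complete", so the claim's finalizer is never dropped and the cleanup wait times out. A minimal sketch of the invariant that error message implies: this is an assumption based on the error text and the v1alpha2 `ResourceClaimStatus.deallocationRequested` field, not the actual apiserver validation code.

```go
package main

import "fmt"

// claimStatus is a simplified stand-in for resource.k8s.io/v1alpha2
// ResourceClaimStatus, reduced to the two fields the error concerns.
type claimStatus struct {
	allocated             bool // status.allocation != nil
	deallocationRequested bool // status.deallocationRequested
}

// allocationRemovalAllowed reports whether an update from oldStatus to
// newStatus would pass the (assumed) validation rule: status.allocation
// may only be removed while completing a previously requested
// deallocation, i.e. together with clearing deallocationRequested.
func allocationRemovalAllowed(oldStatus, newStatus claimStatus) bool {
	removingAllocation := oldStatus.allocated && !newStatus.allocated
	if !removingAllocation {
		return true // not removing the allocation; rule does not apply
	}
	return oldStatus.deallocationRequested && !newStatus.deallocationRequested
}

func main() {
	// What the controller in this log apparently attempted: clear the
	// allocation with no deallocation requested -> Forbidden.
	fmt.Println(allocationRemovalAllowed(
		claimStatus{allocated: true, deallocationRequested: false},
		claimStatus{})) // false

	// The permitted transition: deallocation requested, then completed.
	fmt.Println(allocationRemovalAllowed(
		claimStatus{allocated: true, deallocationRequested: true},
		claimStatus{})) // true
}
```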
> Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 14:01:15.452 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:211 @ 03/14/23 14:01:15.452 Mar 14 14:01:15.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dra - test/e2e/framework/framework.go:250 @ 03/14/23 14:01:15.453 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 03/14/23 14:01:15.467 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:262 @ 03/14/23 14:01:15.472 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 14:01:15.477 (26ms) > Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 14:01:15.477 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 14:01:15.477 (0s) > Enter [BeforeEach] cluster - test/e2e/dra/deploy.go:62 @ 03/14/23 14:01:15.477 STEP: selecting nodes - test/e2e/dra/deploy.go:63 @ 03/14/23 14:01:15.477 Mar 14 14:01:15.482: INFO: testing on nodes [kind-worker kind-worker2] < Exit [BeforeEach] cluster - test/e2e/dra/deploy.go:62 @ 03/14/23 14:01:15.482 (5ms) > Enter [BeforeEach] cluster - test/e2e/dra/deploy.go:95 @ 03/14/23 14:01:15.482 STEP: deploying driver on nodes [kind-worker kind-worker2] - test/e2e/dra/deploy.go:130 @ 03/14/23 14:01:15.482 Mar 14 14:01:15.483: INFO: creating *v1.ReplicaSet: dra-654/dra-test-driver I0314 14:01:15.484183 66264 controller.go:295] "resource controller: Starting" driver="dra-654.k8s.io" I0314 14:01:17.508982 66264 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker2" pod="dra-654/dra-test-driver-nvf7d" I0314 14:01:17.509014 66264 nonblockinggrpcserver.go:107] 
"kubelet plugin/registrar: GRPC server started" node="kind-worker2" pod="dra-654/dra-test-driver-nvf7d" I0314 14:01:17.510054 66264 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker" pod="dra-654/dra-test-driver-zrgsx" I0314 14:01:17.510071 66264 nonblockinggrpcserver.go:107] "kubelet plugin/registrar: GRPC server started" node="kind-worker" pod="dra-654/dra-test-driver-zrgsx" STEP: wait for plugin registration - test/e2e/dra/deploy.go:242 @ 03/14/23 14:01:17.51 I0314 14:01:17.713445 66264 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-654/dra-test-driver-zrgsx" requestID=1 request="&InfoRequest{}" I0314 14:01:17.713502 66264 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-654/dra-test-driver-zrgsx" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-654.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-654.k8s.io.sock,SupportedVersions:[1.0.0],}" I0314 14:01:17.714264 66264 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-654/dra-test-driver-nvf7d" requestID=1 request="&InfoRequest{}" I0314 14:01:17.714293 66264 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-654/dra-test-driver-nvf7d" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-654.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-654.k8s.io.sock,SupportedVersions:[1.0.0],}" I0314 14:01:17.715499 66264 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-654/dra-test-driver-zrgsx" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}" I0314 14:01:17.715544 66264 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-654/dra-test-driver-zrgsx" requestID=2 response="&RegistrationStatusResponse{}" I0314 
14:01:17.720116 66264 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-654/dra-test-driver-nvf7d" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}" I0314 14:01:17.720149 66264 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-654/dra-test-driver-nvf7d" requestID=2 response="&RegistrationStatusResponse{}" < Exit [BeforeEach] cluster - test/e2e/dra/deploy.go:95 @ 03/14/23 14:01:19.511 (4.029s) > Enter [BeforeEach] cluster - test/e2e/dra/dra.go:752 @ 03/14/23 14:01:19.511 STEP: creating *v1alpha2.ResourceClass dra-654-class - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:19.511 END STEP: creating *v1alpha2.ResourceClass dra-654-class - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:19.525 (15ms) < Exit [BeforeEach] cluster - test/e2e/dra/dra.go:752 @ 03/14/23 14:01:19.526 (15ms) > Enter [It] supports external claim referenced by multiple containers of multiple pods - test/e2e/dra/dra.go:209 @ 03/14/23 14:01:19.526 STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:19.526 END STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:19.533 (7ms) STEP: creating *v1alpha2.ResourceClaim external-claim - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:19.533 END STEP: creating *v1alpha2.ResourceClaim external-claim - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:19.558 (25ms) STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:19.558 END STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:19.576 (18ms) STEP: creating *v1.Pod tester-2 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:19.576 END STEP: creating *v1.Pod tester-2 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:19.582 (6ms) STEP: creating *v1.Pod tester-3 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:19.582 END STEP: creating *v1.Pod tester-3 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:19.599 (17ms) I0314 
14:01:20.106300 66264 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker2" pod="dra-654/dra-test-driver-nvf7d" requestID=1 request="&NodePrepareResourceRequest{Namespace:dra-654,ClaimUid:beddf209-626f-441a-9ac9-99efffaa2754,ClaimName:external-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" STEP: creating CDI file /cdi/dra-654.k8s.io-beddf209-626f-441a-9ac9-99efffaa2754.json on node kind-worker2: {"cdiVersion":"0.3.0","kind":"dra-654.k8s.io/test","devices":[{"name":"claim-beddf209-626f-441a-9ac9-99efffaa2754","containerEdits":{"env":["user_a=b"]}}]} - test/e2e/dra/deploy.go:217 @ 03/14/23 14:01:20.106 Mar 14 14:01:20.106: INFO: >>> kubeConfig: /root/.kube/config Mar 14 14:01:20.107: INFO: ExecWithOptions: Clientset creation Mar 14 14:01:20.107: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-654/pods/dra-test-driver-nvf7d/exec?command=sh&command=-c&command=base64+-d+%3E%27%2Fcdi%2Fdra-654.k8s.io-beddf209-626f-441a-9ac9-99efffaa2754.json.tmp%27+%3C%3CEOF%0AeyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTY1NC5rOHMuaW8vdGVzdCIsImRldmljZXMiOlt7Im5hbWUiOiJjbGFpbS1iZWRkZjIwOS02MjZmLTQ0MWEtOWFjOS05OWVmZmZhYTI3NTQiLCJjb250YWluZXJFZGl0cyI6eyJlbnYiOlsidXNlcl9hPWIiXX19XX0%3D%0AEOF&container=plugin&container=plugin&stderr=true&stdout=true) I0314 14:01:20.230731 66264 io.go:119] "Command completed" command=< [sh -c base64 -d >'/cdi/dra-654.k8s.io-beddf209-626f-441a-9ac9-99efffaa2754.json.tmp' <<EOF eyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTY1NC5rOHMuaW8vdGVzdCIsImRldmljZXMiOlt7Im5hbWUiOiJjbGFpbS1iZWRkZjIwOS02MjZmLTQ0MWEtOWFjOS05OWVmZmZhYTI3NTQiLCJjb250YWluZXJFZGl0cyI6eyJlbnYiOlsidXNlcl9hPWIiXX19XX0= EOF] > stdout="" stderr="" err=<nil> Mar 14 14:01:20.230: INFO: >>> kubeConfig: /root/.kube/config Mar 14 14:01:20.231: INFO: ExecWithOptions: Clientset creation Mar 14 14:01:20.231: INFO: ExecWithOptions: execute(POST 
https://127.0.0.1:34309/api/v1/namespaces/dra-654/pods/dra-test-driver-nvf7d/exec?command=mv&command=%2Fcdi%2Fdra-654.k8s.io-beddf209-626f-441a-9ac9-99efffaa2754.json.tmp&command=%2Fcdi%2Fdra-654.k8s.io-beddf209-626f-441a-9ac9-99efffaa2754.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 14:01:20.350567 66264 io.go:119] "Command completed" command=[mv /cdi/dra-654.k8s.io-beddf209-626f-441a-9ac9-99efffaa2754.json.tmp /cdi/dra-654.k8s.io-beddf209-626f-441a-9ac9-99efffaa2754.json] stdout="" stderr="" err=<nil> I0314 14:01:20.350627 66264 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker2" pod="dra-654/dra-test-driver-nvf7d" requestID=1 response="&NodePrepareResourceResponse{CdiDevices:[dra-654.k8s.io/test=claim-beddf209-626f-441a-9ac9-99efffaa2754],}" I0314 14:01:21.413053 66264 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker" pod="dra-654/dra-test-driver-zrgsx" requestID=1 request="&NodePrepareResourceRequest{Namespace:dra-654,ClaimUid:beddf209-626f-441a-9ac9-99efffaa2754,ClaimName:external-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" STEP: creating CDI file /cdi/dra-654.k8s.io-beddf209-626f-441a-9ac9-99efffaa2754.json on node kind-worker: {"cdiVersion":"0.3.0","kind":"dra-654.k8s.io/test","devices":[{"name":"claim-beddf209-626f-441a-9ac9-99efffaa2754","containerEdits":{"env":["user_a=b"]}}]} - test/e2e/dra/deploy.go:217 @ 03/14/23 14:01:21.413 Mar 14 14:01:21.413: INFO: >>> kubeConfig: /root/.kube/config Mar 14 14:01:21.414: INFO: ExecWithOptions: Clientset creation Mar 14 14:01:21.414: INFO: ExecWithOptions: execute(POST 
https://127.0.0.1:34309/api/v1/namespaces/dra-654/pods/dra-test-driver-zrgsx/exec?command=sh&command=-c&command=base64+-d+%3E%27%2Fcdi%2Fdra-654.k8s.io-beddf209-626f-441a-9ac9-99efffaa2754.json.tmp%27+%3C%3CEOF%0AeyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTY1NC5rOHMuaW8vdGVzdCIsImRldmljZXMiOlt7Im5hbWUiOiJjbGFpbS1iZWRkZjIwOS02MjZmLTQ0MWEtOWFjOS05OWVmZmZhYTI3NTQiLCJjb250YWluZXJFZGl0cyI6eyJlbnYiOlsidXNlcl9hPWIiXX19XX0%3D%0AEOF&container=plugin&container=plugin&stderr=true&stdout=true) I0314 14:01:21.546060 66264 io.go:119] "Command completed" command=< [sh -c base64 -d >'/cdi/dra-654.k8s.io-beddf209-626f-441a-9ac9-99efffaa2754.json.tmp' <<EOF eyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTY1NC5rOHMuaW8vdGVzdCIsImRldmljZXMiOlt7Im5hbWUiOiJjbGFpbS1iZWRkZjIwOS02MjZmLTQ0MWEtOWFjOS05OWVmZmZhYTI3NTQiLCJjb250YWluZXJFZGl0cyI6eyJlbnYiOlsidXNlcl9hPWIiXX19XX0= EOF] > stdout="" stderr="" err=<nil> Mar 14 14:01:21.546: INFO: >>> kubeConfig: /root/.kube/config Mar 14 14:01:21.547: INFO: ExecWithOptions: Clientset creation Mar 14 14:01:21.547: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-654/pods/dra-test-driver-zrgsx/exec?command=mv&command=%2Fcdi%2Fdra-654.k8s.io-beddf209-626f-441a-9ac9-99efffaa2754.json.tmp&command=%2Fcdi%2Fdra-654.k8s.io-beddf209-626f-441a-9ac9-99efffaa2754.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 14:01:21.668021 66264 io.go:119] "Command completed" command=[mv /cdi/dra-654.k8s.io-beddf209-626f-441a-9ac9-99efffaa2754.json.tmp /cdi/dra-654.k8s.io-beddf209-626f-441a-9ac9-99efffaa2754.json] stdout="" stderr="" err=<nil> I0314 14:01:21.668077 66264 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker" pod="dra-654/dra-test-driver-zrgsx" requestID=1 response="&NodePrepareResourceResponse{CdiDevices:[dra-654.k8s.io/test=claim-beddf209-626f-441a-9ac9-99efffaa2754],}" < Exit [It] supports external claim referenced by multiple containers of multiple pods - 
test/e2e/dra/dra.go:209 @ 03/14/23 14:01:23.729 (4.204s) > Enter [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 14:01:23.729 Mar 14 14:01:23.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 14:01:23.733 (4ms) > Enter [DeferCleanup (Each)] cluster - test/e2e/dra/dra.go:762 @ 03/14/23 14:01:23.733 STEP: delete pods and claims - test/e2e/dra/dra.go:773 @ 03/14/23 14:01:23.743 STEP: deleting *v1.Pod dra-654/tester-1 - test/e2e/dra/dra.go:780 @ 03/14/23 14:01:23.748 STEP: deleting *v1.Pod dra-654/tester-2 - test/e2e/dra/dra.go:780 @ 03/14/23 14:01:23.759 STEP: deleting *v1.Pod dra-654/tester-3 - test/e2e/dra/dra.go:780 @ 03/14/23 14:01:23.779 I0314 14:01:25.758561 66264 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker2" pod="dra-654/dra-test-driver-nvf7d" requestID=2 request="&NodeUnprepareResourceRequest{Namespace:dra-654,ClaimUid:beddf209-626f-441a-9ac9-99efffaa2754,ClaimName:external-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" STEP: deleting CDI file /cdi/dra-654.k8s.io-beddf209-626f-441a-9ac9-99efffaa2754.json on node kind-worker2 - test/e2e/dra/deploy.go:221 @ 03/14/23 14:01:25.758 Mar 14 14:01:25.758: INFO: >>> kubeConfig: /root/.kube/config Mar 14 14:01:25.759: INFO: ExecWithOptions: Clientset creation Mar 14 14:01:25.760: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-654/pods/dra-test-driver-nvf7d/exec?command=rm&command=-rf&command=%2Fcdi%2Fdra-654.k8s.io-beddf209-626f-441a-9ac9-99efffaa2754.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 14:01:25.866914 66264 io.go:119] "Command completed" command=[rm -rf /cdi/dra-654.k8s.io-beddf209-626f-441a-9ac9-99efffaa2754.json] stdout="" stderr="" err=<nil> I0314 14:01:25.866965 66264 
nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker2" pod="dra-654/dra-test-driver-nvf7d" requestID=2 response="&NodeUnprepareResourceResponse{}" I0314 14:01:26.376513 66264 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker" pod="dra-654/dra-test-driver-zrgsx" requestID=2 request="&NodeUnprepareResourceRequest{Namespace:dra-654,ClaimUid:beddf209-626f-441a-9ac9-99efffaa2754,ClaimName:external-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" STEP: deleting CDI file /cdi/dra-654.k8s.io-beddf209-626f-441a-9ac9-99efffaa2754.json on node kind-worker - test/e2e/dra/deploy.go:221 @ 03/14/23 14:01:26.376 Mar 14 14:01:26.376: INFO: >>> kubeConfig: /root/.kube/config Mar 14 14:01:26.377: INFO: ExecWithOptions: Clientset creation Mar 14 14:01:26.377: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-654/pods/dra-test-driver-zrgsx/exec?command=rm&command=-rf&command=%2Fcdi%2Fdra-654.k8s.io-beddf209-626f-441a-9ac9-99efffaa2754.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 14:01:26.478077 66264 io.go:119] "Command completed" command=[rm -rf /cdi/dra-654.k8s.io-beddf209-626f-441a-9ac9-99efffaa2754.json] stdout="" stderr="" err=<nil> I0314 14:01:26.478116 66264 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker" pod="dra-654/dra-test-driver-zrgsx" requestID=2 response="&NodeUnprepareResourceResponse{}" STEP: deleting *v1alpha2.ResourceClaim dra-654/external-claim - test/e2e/dra/dra.go:796 @ 03/14/23 14:01:27.815 STEP: waiting for resources on kind-worker2 to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 14:01:27.82 STEP: waiting for resources on kind-worker to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 14:01:27.82 STEP: waiting for claims to be deallocated and deleted - test/e2e/dra/dra.go:808 @ 03/14/23 14:01:27.82 E0314 14:01:27.825472 66264 controller.go:345] 
"resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-654/external-claim" E0314 14:01:27.834474 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-654/external-claim" E0314 14:01:27.849771 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-654/external-claim" E0314 14:01:27.875160 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-654/external-claim" E0314 14:01:27.919832 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-654/external-claim" E0314 14:01:28.010774 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-654/external-claim" E0314 14:01:28.175803 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" 
key="claim:dra-654/external-claim" E0314 14:01:28.503465 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-654/external-claim" E0314 14:01:29.149472 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-654/external-claim" E0314 14:01:30.434644 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-654/external-claim" E0314 14:01:33.000115 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-654/external-claim" E0314 14:01:38.128107 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-654/external-claim" E0314 14:01:48.373720 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-654/external-claim" E0314 14:02:08.858399 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: 
status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-654/external-claim" [FAILED] Timed out after 60.001s. claims in the namespaces Expected <[]v1alpha2.ResourceClaim | len:1, cap:1>: - metadata: creationTimestamp: "2023-03-14T14:01:19Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T14:01:27Z" finalizers: - dra-654.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"dra-654.k8s.io/deletion-protection": {} f:spec: f:allocationMode: {} f:parametersRef: .: {} f:kind: {} f:name: {} f:resourceClassName: {} manager: e2e.test operation: Update time: "2023-03-14T14:01:19Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: f:allocation: .: {} f:availableOnNodes: {} f:context: {} f:shareable: {} f:driverName: {} manager: e2e.test operation: Update subresource: status time: "2023-03-14T14:01:19Z" name: external-claim namespace: dra-654 resourceVersion: "3204" uid: beddf209-626f-441a-9ac9-99efffaa2754 spec: allocationMode: WaitForFirstConsumer parametersRef: kind: ConfigMap name: parameters-1 resourceClassName: dra-654-class status: allocation: availableOnNodes: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kind-worker - kind-worker2 context: - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}' shareable: true driverName: dra-654.k8s.io to be empty In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:02:27.822 < Exit [DeferCleanup (Each)] cluster - test/e2e/dra/dra.go:762 @ 03/14/23 14:02:27.822 (1m4.089s) > Enter [DeferCleanup (Each)] cluster - test/e2e/dra/deploy.go:103 @ 03/14/23 14:02:27.822 I0314 14:02:27.823474 66264 controller.go:310] "resource controller: Shutting down" driver="dra-654.k8s.io" E0314 14:02:27.824079 66264 nonblockinggrpcserver.go:101] "kubelet plugin/registrar: GRPC server failed" err="listening was stopped" 
node="kind-worker" pod="dra-654/dra-test-driver-zrgsx" E0314 14:02:27.824081 66264 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker2" pod="dra-654/dra-test-driver-nvf7d" E0314 14:02:27.825065 66264 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker" pod="dra-654/dra-test-driver-zrgsx" < Exit [DeferCleanup (Each)] cluster - test/e2e/dra/deploy.go:103 @ 03/14/23 14:02:27.825 (2ms) > Enter [DeferCleanup (Each)] cluster - deleting *v1.ReplicaSet: dra-654/dra-test-driver | create.go:156 @ 03/14/23 14:02:27.825 < Exit [DeferCleanup (Each)] cluster - deleting *v1.ReplicaSet: dra-654/dra-test-driver | create.go:156 @ 03/14/23 14:02:27.836 (12ms) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 14:02:27.836 < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 14:02:27.836 (0s) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:02:27.836 STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:02:27.836 STEP: Collecting events from namespace "dra-654". - test/e2e/framework/debug/dump.go:42 @ 03/14/23 14:02:27.836 STEP: Found 58 events. 
- test/e2e/framework/debug/dump.go:46 @ 03/14/23 14:02:27.841 Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:15 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-nvf7d Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:15 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-zrgsx Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:15 +0000 UTC - event for dra-test-driver-nvf7d: {default-scheduler } Scheduled: Successfully assigned dra-654/dra-test-driver-nvf7d to kind-worker2 Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:15 +0000 UTC - event for dra-test-driver-zrgsx: {default-scheduler } Scheduled: Successfully assigned dra-654/dra-test-driver-zrgsx to kind-worker Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:16 +0000 UTC - event for dra-test-driver-nvf7d: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:16 +0000 UTC - event for dra-test-driver-nvf7d: {kubelet kind-worker2} Created: Created container registrar Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:16 +0000 UTC - event for dra-test-driver-nvf7d: {kubelet kind-worker2} Started: Started container registrar Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:16 +0000 UTC - event for dra-test-driver-nvf7d: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:16 +0000 UTC - event for dra-test-driver-nvf7d: {kubelet kind-worker2} Created: Created container plugin Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:16 +0000 UTC - event for dra-test-driver-nvf7d: {kubelet kind-worker2} Started: Started container plugin Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:16 +0000 UTC - event for dra-test-driver-zrgsx: {kubelet kind-worker} Pulled: Container image 
"registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:16 +0000 UTC - event for dra-test-driver-zrgsx: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:16 +0000 UTC - event for dra-test-driver-zrgsx: {kubelet kind-worker} Created: Created container plugin Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:16 +0000 UTC - event for dra-test-driver-zrgsx: {kubelet kind-worker} Started: Started container plugin Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:16 +0000 UTC - event for dra-test-driver-zrgsx: {kubelet kind-worker} Started: Started container registrar Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:16 +0000 UTC - event for dra-test-driver-zrgsx: {kubelet kind-worker} Created: Created container registrar Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:19 +0000 UTC - event for tester-1: {default-scheduler } FailedScheduling: running Reserve plugin "DynamicResources": waiting for resource driver to allocate resource Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:19 +0000 UTC - event for tester-2: {default-scheduler } FailedScheduling: running Reserve plugin "DynamicResources": waiting for resource driver to allocate resource Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:19 +0000 UTC - event for tester-3: {default-scheduler } Scheduled: Successfully assigned dra-654/tester-3 to kind-worker2 Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:20 +0000 UTC - event for tester-1: {default-scheduler } Scheduled: Successfully assigned dra-654/tester-1 to kind-worker2 Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:20 +0000 UTC - event for tester-2: {default-scheduler } Scheduled: Successfully assigned dra-654/tester-2 to kind-worker Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:20 +0000 UTC - event for tester-3: {kubelet kind-worker2} Created: Created container with-resource Mar 14 
14:02:27.841: INFO: At 2023-03-14 14:01:20 +0000 UTC - event for tester-3: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:21 +0000 UTC - event for tester-1: {kubelet kind-worker2} Created: Created container with-resource
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:21 +0000 UTC - event for tester-1: {kubelet kind-worker2} Started: Started container with-resource
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:21 +0000 UTC - event for tester-1: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:21 +0000 UTC - event for tester-1: {kubelet kind-worker2} Created: Created container with-resource-1
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:21 +0000 UTC - event for tester-1: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:21 +0000 UTC - event for tester-2: {kubelet kind-worker} Created: Created container with-resource
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:21 +0000 UTC - event for tester-2: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:21 +0000 UTC - event for tester-3: {kubelet kind-worker2} Started: Started container with-resource-1
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:21 +0000 UTC - event for tester-3: {kubelet kind-worker2} Started: Started container with-resource-1-2
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:21 +0000 UTC - event for tester-3: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:21 +0000 UTC - event for tester-3: {kubelet kind-worker2} Created: Created container with-resource-1-2
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:21 +0000 UTC - event for tester-3: {kubelet kind-worker2} Started: Started container with-resource
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:21 +0000 UTC - event for tester-3: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:21 +0000 UTC - event for tester-3: {kubelet kind-worker2} Created: Created container with-resource-1
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:22 +0000 UTC - event for tester-1: {kubelet kind-worker2} Started: Started container with-resource-1
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:22 +0000 UTC - event for tester-1: {kubelet kind-worker2} Started: Started container with-resource-1-2
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:22 +0000 UTC - event for tester-1: {kubelet kind-worker2} Created: Created container with-resource-1-2
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:22 +0000 UTC - event for tester-1: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:22 +0000 UTC - event for tester-2: {kubelet kind-worker} Started: Started container with-resource-1
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:22 +0000 UTC - event for tester-2: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:22 +0000 UTC - event for tester-2: {kubelet kind-worker} Created: Created container with-resource-1-2
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:22 +0000 UTC - event for tester-2: {kubelet kind-worker} Started: Started container with-resource-1-2
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:22 +0000 UTC - event for tester-2: {kubelet kind-worker} Created: Created container with-resource-1
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:22 +0000 UTC - event for tester-2: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:22 +0000 UTC - event for tester-2: {kubelet kind-worker} Started: Started container with-resource
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:23 +0000 UTC - event for tester-1: {kubelet kind-worker2} Killing: Stopping container with-resource
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:23 +0000 UTC - event for tester-1: {kubelet kind-worker2} Killing: Stopping container with-resource-1
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:23 +0000 UTC - event for tester-1: {kubelet kind-worker2} Killing: Stopping container with-resource-1-2
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:24 +0000 UTC - event for tester-3: {kubelet kind-worker2} Killing: Stopping container with-resource
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:24 +0000 UTC - event for tester-3: {kubelet kind-worker2} Killing: Stopping container with-resource-1-2
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:24 +0000 UTC - event for tester-3: {kubelet kind-worker2} Killing: Stopping container with-resource-1
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:25 +0000 UTC - event for tester-2: {kubelet kind-worker} Killing: Stopping container with-resource-1
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:25 +0000 UTC - event for tester-2: {kubelet kind-worker} Killing: Stopping container with-resource-1-2
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:25 +0000 UTC - event for tester-2: {kubelet kind-worker} Killing: Stopping container with-resource
Mar 14 14:02:27.841: INFO: At 2023-03-14 14:01:27 +0000 UTC - event for external-claim: {resource driver dra-654.k8s.io } Failed: remove allocation: ResourceClaim.resource.k8s.io "external-claim" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete
Mar 14 14:02:27.844: INFO: POD NODE PHASE GRACE CONDITIONS Mar 14 14:02:27.844: INFO: dra-test-driver-nvf7d kind-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:17 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:15 +0000 UTC }] Mar 14 14:02:27.844: INFO: dra-test-driver-zrgsx kind-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:15 +0000 UTC }] Mar 14 14:02:27.844: INFO: Mar 14 14:02:27.906: INFO: Logging node info for node kind-control-plane Mar 14 14:02:27.913: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7b0c8f1f-7d2e-4b5f-ab52-0e2399b9f764 438 0 2023-03-14 13:57:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-14 13:57:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:57:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:58:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e8e6b089f1f44ab8ef4a2bc879ddd73,SystemUUID:ee43f17b-1489-4ea4-bec5-b7916f4f1fb0,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 
registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:02:27.913: INFO: Logging kubelet events for node kind-control-plane Mar 14 14:02:27.919: INFO: Logging pods the kubelet thinks is on node kind-control-plane Mar 14 14:02:27.929: INFO: local-path-provisioner-687869657c-v9k2k started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:27.929: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 14 14:02:27.929: INFO: kube-proxy-fm2jh started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:27.929: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:02:27.929: INFO: coredns-ffc665895-mnldc started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:27.929: INFO: Container coredns ready: true, restart count 0 Mar 14 14:02:27.929: INFO: etcd-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:27.929: INFO: Container etcd ready: true, restart count 0 Mar 14 14:02:27.929: INFO: kube-apiserver-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:27.929: INFO: Container kube-apiserver ready: 
true, restart count 0 Mar 14 14:02:27.929: INFO: kindnet-nx87k started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:27.929: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:02:27.929: INFO: coredns-ffc665895-vmqts started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:27.929: INFO: Container coredns ready: true, restart count 0 Mar 14 14:02:27.929: INFO: kube-controller-manager-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:27.929: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 14 14:02:27.929: INFO: kube-scheduler-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:27.929: INFO: Container kube-scheduler ready: true, restart count 0 Mar 14 14:02:28.009: INFO: Latency metrics for node kind-control-plane Mar 14 14:02:28.009: INFO: Logging node info for node kind-worker Mar 14 14:02:28.013: INFO: Node Info: &Node{ObjectMeta:{kind-worker 9cca062e-b3b4-4ef2-9c10-412063b4ece4 1368 0 2023-03-14 13:58:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } 
{kube-controller-manager Update v1 2023-03-14 13:58:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2023-03-14 13:59:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 
+0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5a3b3841831c42fc96e5cb187f537f04,SystemUUID:ed67c939-37e3-47de-ab06-0144304a5aa1,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 
registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:02:28.013: INFO: Logging kubelet events for node kind-worker Mar 14 14:02:28.017: INFO: Logging pods the kubelet thinks is on node kind-worker Mar 14 14:02:28.026: INFO: kindnet-fzdn9 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:28.026: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:02:28.026: INFO: kube-proxy-l4q98 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:28.026: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:02:28.026: INFO: dra-test-driver-zg2wf started at 2023-03-14 14:01:29 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:28.026: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:28.026: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:28.026: INFO: dra-test-driver-zrgsx started at 2023-03-14 14:01:15 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:28.026: INFO: Container plugin ready: 
true, restart count 0 Mar 14 14:02:28.026: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:28.026: INFO: dra-test-driver-xrvr8 started at 2023-03-14 14:01:15 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:28.026: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:28.026: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:28.026: INFO: dra-test-driver-dx7tb started at 2023-03-14 14:01:20 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:28.026: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:28.026: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:28.026: INFO: dra-test-driver-9bgm4 started at 2023-03-14 14:01:23 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:28.026: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:28.026: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:28.026: INFO: dra-test-driver-wsdzm started at 2023-03-14 14:01:20 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:28.026: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:28.026: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:28.102: INFO: Latency metrics for node kind-worker Mar 14 14:02:28.102: INFO: Logging node info for node kind-worker2 Mar 14 14:02:28.109: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 49a194e2-5e70-437e-aa3c-3a490ff23c54 1358 0 2023-03-14 13:58:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:58:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:59:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:48810b9a669b47cea51d5fa0f821cf84,SystemUUID:603f9452-86ad-460a-83be-e3f10d4a362c,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 
registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:02:28.109: INFO: Logging kubelet events for node kind-worker2 Mar 14 14:02:28.114: INFO: Logging pods the kubelet thinks is on node kind-worker2 Mar 14 14:02:28.123: INFO: dra-test-driver-w6qnj started at 2023-03-14 14:01:20 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:28.123: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:28.123: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:28.123: INFO: dra-test-driver-h6qql 
started at 2023-03-14 14:01:23 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:28.123: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:28.123: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:28.123: INFO: kindnet-5qdz7 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:28.123: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:02:28.123: INFO: dra-test-driver-9f5vg started at 2023-03-14 14:01:29 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:28.123: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:28.123: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:28.123: INFO: dra-test-driver-nvf7d started at 2023-03-14 14:01:15 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:28.123: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:28.123: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:28.123: INFO: dra-test-driver-lgdqg started at 2023-03-14 14:01:20 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:28.123: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:28.123: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:28.123: INFO: kube-proxy-vnlx8 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:28.123: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:02:28.174: INFO: Latency metrics for node kind-worker2 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:02:28.174 (338ms) < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:02:28.174 (338ms) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:02:28.174 STEP: Destroying namespace "dra-654" for this suite. 
- test/e2e/framework/framework.go:351 @ 03/14/23 14:02:28.174 < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:02:28.182 (8ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:02:28.183 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:02:28.183 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sDRA\s\[Feature\:DynamicResourceAllocation\]\scluster\swith\sdelayed\sallocation\ssupports\sexternal\sclaim\sreferenced\sby\smultiple\spods$'
[FAILED] Timed out after 60.000s. claims in the namespaces Expected <[]v1alpha2.ResourceClaim | len:1, cap:1>: - metadata: creationTimestamp: "2023-03-14T13:58:51Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T13:59:01Z" finalizers: - dra-2381.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"dra-2381.k8s.io/deletion-protection": {} f:spec: f:allocationMode: {} f:parametersRef: .: {} f:kind: {} f:name: {} f:resourceClassName: {} manager: e2e.test operation: Update time: "2023-03-14T13:58:51Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: f:allocation: .: {} f:availableOnNodes: {} f:context: {} f:shareable: {} f:driverName: {} manager: e2e.test operation: Update subresource: status time: "2023-03-14T13:58:51Z" name: external-claim namespace: dra-2381 resourceVersion: "1282" uid: c616330f-07b6-4656-b524-95a90bb804cb spec: allocationMode: WaitForFirstConsumer parametersRef: kind: ConfigMap name: parameters-1 resourceClassName: dra-2381-class status: allocation: availableOnNodes: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kind-worker - kind-worker2 context: - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}' shareable: true driverName: dra-2381.k8s.io to be empty In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:00:02.013from junit_01.xml
> Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 13:58:45.55 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:211 @ 03/14/23 13:58:45.55 Mar 14 13:58:45.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dra - test/e2e/framework/framework.go:250 @ 03/14/23 13:58:45.551 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 03/14/23 13:58:45.618 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:262 @ 03/14/23 13:58:45.625 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 13:58:45.635 (85ms) > Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 13:58:45.635 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 13:58:45.635 (0s) > Enter [BeforeEach] cluster - test/e2e/dra/deploy.go:62 @ 03/14/23 13:58:45.635 STEP: selecting nodes - test/e2e/dra/deploy.go:63 @ 03/14/23 13:58:45.635 Mar 14 13:58:45.644: INFO: testing on nodes [kind-worker kind-worker2] < Exit [BeforeEach] cluster - test/e2e/dra/deploy.go:62 @ 03/14/23 13:58:45.644 (9ms) > Enter [BeforeEach] cluster - test/e2e/dra/deploy.go:95 @ 03/14/23 13:58:45.644 STEP: deploying driver on nodes [kind-worker kind-worker2] - test/e2e/dra/deploy.go:130 @ 03/14/23 13:58:45.644 I0314 13:58:45.644907 66289 controller.go:295] "resource controller: Starting" driver="dra-2381.k8s.io" Mar 14 13:58:45.648: INFO: creating *v1.ReplicaSet: dra-2381/dra-test-driver I0314 13:58:49.689591 66289 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker" pod="dra-2381/dra-test-driver-86zdr" I0314 13:58:49.689613 66289 nonblockinggrpcserver.go:107] 
"kubelet plugin/registrar: GRPC server started" node="kind-worker" pod="dra-2381/dra-test-driver-86zdr"
I0314 13:58:49.690556 66289 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker2" pod="dra-2381/dra-test-driver-lgb7f"
I0314 13:58:49.690578 66289 nonblockinggrpcserver.go:107] "kubelet plugin/registrar: GRPC server started" node="kind-worker2" pod="dra-2381/dra-test-driver-lgb7f"
STEP: wait for plugin registration - test/e2e/dra/deploy.go:242 @ 03/14/23 13:58:49.69
I0314 13:58:50.263478 66289 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-2381/dra-test-driver-86zdr" requestID=1 request="&InfoRequest{}"
I0314 13:58:50.263601 66289 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-2381/dra-test-driver-86zdr" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-2381.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-2381.k8s.io.sock,SupportedVersions:[1.0.0],}"
I0314 13:58:50.272045 66289 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-2381/dra-test-driver-lgb7f" requestID=1 request="&InfoRequest{}"
I0314 13:58:50.272096 66289 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-2381/dra-test-driver-lgb7f" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-2381.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-2381.k8s.io.sock,SupportedVersions:[1.0.0],}"
I0314 13:58:50.280712 66289 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-2381/dra-test-driver-lgb7f" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}"
I0314 13:58:50.280748 66289 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-2381/dra-test-driver-lgb7f" requestID=2 response="&RegistrationStatusResponse{}"
I0314 13:58:50.290162 66289 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-2381/dra-test-driver-86zdr" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}"
I0314 13:58:50.290191 66289 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-2381/dra-test-driver-86zdr" requestID=2 response="&RegistrationStatusResponse{}"
< Exit [BeforeEach] cluster - test/e2e/dra/deploy.go:95 @ 03/14/23 13:58:51.69 (6.047s)
> Enter [BeforeEach] cluster - test/e2e/dra/dra.go:752 @ 03/14/23 13:58:51.69
STEP: creating *v1alpha2.ResourceClass dra-2381-class - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.69
END STEP: creating *v1alpha2.ResourceClass dra-2381-class - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.706 (16ms)
< Exit [BeforeEach] cluster - test/e2e/dra/dra.go:752 @ 03/14/23 13:58:51.706 (16ms)
> Enter [It] supports external claim referenced by multiple pods - test/e2e/dra/dra.go:196 @ 03/14/23 13:58:51.706
STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.706
END STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.755 (48ms)
STEP: creating *v1alpha2.ResourceClaim external-claim - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.755
END STEP: creating *v1alpha2.ResourceClaim external-claim - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.806 (51ms)
STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.806
END STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.823 (17ms)
STEP: creating *v1.Pod tester-2 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.823
END STEP: creating *v1.Pod tester-2 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.829 (6ms)
STEP: creating *v1.Pod tester-3 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.829
END STEP: creating *v1.Pod tester-3 - test/e2e/dra/dra.go:706 @ 03/14/23
13:58:51.843 (14ms) I0314 13:58:52.265978 66289 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker2" pod="dra-2381/dra-test-driver-lgb7f" requestID=1 request="&NodePrepareResourceRequest{Namespace:dra-2381,ClaimUid:c616330f-07b6-4656-b524-95a90bb804cb,ClaimName:external-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" STEP: creating CDI file /cdi/dra-2381.k8s.io-c616330f-07b6-4656-b524-95a90bb804cb.json on node kind-worker2: {"cdiVersion":"0.3.0","kind":"dra-2381.k8s.io/test","devices":[{"name":"claim-c616330f-07b6-4656-b524-95a90bb804cb","containerEdits":{"env":["user_a=b"]}}]} - test/e2e/dra/deploy.go:217 @ 03/14/23 13:58:52.266 Mar 14 13:58:52.266: INFO: >>> kubeConfig: /root/.kube/config Mar 14 13:58:52.267: INFO: ExecWithOptions: Clientset creation Mar 14 13:58:52.267: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-2381/pods/dra-test-driver-lgb7f/exec?command=sh&command=-c&command=base64+-d+%3E%27%2Fcdi%2Fdra-2381.k8s.io-c616330f-07b6-4656-b524-95a90bb804cb.json.tmp%27+%3C%3CEOF%0AeyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTIzODEuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tYzYxNjMzMGYtMDdiNi00NjU2LWI1MjQtOTVhOTBiYjgwNGNiIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19%0AEOF&container=plugin&container=plugin&stderr=true&stdout=true) I0314 13:58:52.390734 66289 io.go:119] "Command completed" command=< [sh -c base64 -d >'/cdi/dra-2381.k8s.io-c616330f-07b6-4656-b524-95a90bb804cb.json.tmp' <<EOF eyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTIzODEuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tYzYxNjMzMGYtMDdiNi00NjU2LWI1MjQtOTVhOTBiYjgwNGNiIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19 EOF] > stdout="" stderr="" err=<nil> Mar 14 13:58:52.390: INFO: >>> kubeConfig: /root/.kube/config Mar 14 13:58:52.391: INFO: ExecWithOptions: Clientset creation Mar 14 13:58:52.391: INFO: ExecWithOptions: execute(POST 
https://127.0.0.1:34309/api/v1/namespaces/dra-2381/pods/dra-test-driver-lgb7f/exec?command=mv&command=%2Fcdi%2Fdra-2381.k8s.io-c616330f-07b6-4656-b524-95a90bb804cb.json.tmp&command=%2Fcdi%2Fdra-2381.k8s.io-c616330f-07b6-4656-b524-95a90bb804cb.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 13:58:52.502535 66289 io.go:119] "Command completed" command=[mv /cdi/dra-2381.k8s.io-c616330f-07b6-4656-b524-95a90bb804cb.json.tmp /cdi/dra-2381.k8s.io-c616330f-07b6-4656-b524-95a90bb804cb.json] stdout="" stderr="" err=<nil> I0314 13:58:52.502622 66289 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker2" pod="dra-2381/dra-test-driver-lgb7f" requestID=1 response="&NodePrepareResourceResponse{CdiDevices:[dra-2381.k8s.io/test=claim-c616330f-07b6-4656-b524-95a90bb804cb],}" E0314 13:58:53.511398 66289 portproxy.go:243] port forwarding for dra-2381/dra-test-driver-lgb7f:9001 #0: an error occurred connecting to the remote port: error forwarding port 9001 to pod eb87ab85cbe11c5c196d4cf06b34540e0f8de8285ed3bc16710d8db552649653, uid : failed to execute portforward in network namespace "/var/run/netns/cni-f1874a7d-2078-ffaa-db96-68daa8a929ff": read tcp4 127.0.0.1:42380->127.0.0.1:9001: read: connection reset by peer I0314 13:58:54.313339 66289 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker" pod="dra-2381/dra-test-driver-86zdr" requestID=1 request="&NodePrepareResourceRequest{Namespace:dra-2381,ClaimUid:c616330f-07b6-4656-b524-95a90bb804cb,ClaimName:external-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" STEP: creating CDI file /cdi/dra-2381.k8s.io-c616330f-07b6-4656-b524-95a90bb804cb.json on node kind-worker: {"cdiVersion":"0.3.0","kind":"dra-2381.k8s.io/test","devices":[{"name":"claim-c616330f-07b6-4656-b524-95a90bb804cb","containerEdits":{"env":["user_a=b"]}}]} - test/e2e/dra/deploy.go:217 @ 03/14/23 13:58:54.313 Mar 14 13:58:54.313: INFO: >>> 
kubeConfig: /root/.kube/config Mar 14 13:58:54.314: INFO: ExecWithOptions: Clientset creation Mar 14 13:58:54.314: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-2381/pods/dra-test-driver-86zdr/exec?command=sh&command=-c&command=base64+-d+%3E%27%2Fcdi%2Fdra-2381.k8s.io-c616330f-07b6-4656-b524-95a90bb804cb.json.tmp%27+%3C%3CEOF%0AeyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTIzODEuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tYzYxNjMzMGYtMDdiNi00NjU2LWI1MjQtOTVhOTBiYjgwNGNiIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19%0AEOF&container=plugin&container=plugin&stderr=true&stdout=true) I0314 13:58:54.542679 66289 io.go:119] "Command completed" command=< [sh -c base64 -d >'/cdi/dra-2381.k8s.io-c616330f-07b6-4656-b524-95a90bb804cb.json.tmp' <<EOF eyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTIzODEuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tYzYxNjMzMGYtMDdiNi00NjU2LWI1MjQtOTVhOTBiYjgwNGNiIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19 EOF] > stdout="" stderr="" err=<nil> Mar 14 13:58:54.542: INFO: >>> kubeConfig: /root/.kube/config Mar 14 13:58:54.543: INFO: ExecWithOptions: Clientset creation Mar 14 13:58:54.543: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-2381/pods/dra-test-driver-86zdr/exec?command=mv&command=%2Fcdi%2Fdra-2381.k8s.io-c616330f-07b6-4656-b524-95a90bb804cb.json.tmp&command=%2Fcdi%2Fdra-2381.k8s.io-c616330f-07b6-4656-b524-95a90bb804cb.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 13:58:54.744983 66289 io.go:119] "Command completed" command=[mv /cdi/dra-2381.k8s.io-c616330f-07b6-4656-b524-95a90bb804cb.json.tmp /cdi/dra-2381.k8s.io-c616330f-07b6-4656-b524-95a90bb804cb.json] stdout="" stderr="" err=<nil> I0314 13:58:54.745037 66289 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker" pod="dra-2381/dra-test-driver-86zdr" requestID=1 
response="&NodePrepareResourceResponse{CdiDevices:[dra-2381.k8s.io/test=claim-c616330f-07b6-4656-b524-95a90bb804cb],}"
< Exit [It] supports external claim referenced by multiple pods - test/e2e/dra/dra.go:196 @ 03/14/23 13:58:57.892 (6.186s)
> Enter [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 13:58:57.892
Mar 14 13:58:57.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
< Exit [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 13:58:57.897 (4ms)
> Enter [DeferCleanup (Each)] cluster - test/e2e/dra/dra.go:762 @ 03/14/23 13:58:57.897
STEP: delete pods and claims - test/e2e/dra/dra.go:773 @ 03/14/23 13:58:57.903
STEP: deleting *v1.Pod dra-2381/tester-1 - test/e2e/dra/dra.go:780 @ 03/14/23 13:58:57.908
STEP: deleting *v1.Pod dra-2381/tester-2 - test/e2e/dra/dra.go:780 @ 03/14/23 13:58:57.921
STEP: deleting *v1.Pod dra-2381/tester-3 - test/e2e/dra/dra.go:780 @ 03/14/23 13:58:57.93
I0314 13:58:59.432804 66289 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker" pod="dra-2381/dra-test-driver-86zdr" requestID=2 request="&NodeUnprepareResourceRequest{Namespace:dra-2381,ClaimUid:c616330f-07b6-4656-b524-95a90bb804cb,ClaimName:external-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}"
STEP: deleting CDI file /cdi/dra-2381.k8s.io-c616330f-07b6-4656-b524-95a90bb804cb.json on node kind-worker - test/e2e/dra/deploy.go:221 @ 03/14/23 13:58:59.432
Mar 14 13:58:59.432: INFO: >>> kubeConfig: /root/.kube/config
Mar 14 13:58:59.434: INFO: ExecWithOptions: Clientset creation
Mar 14 13:58:59.434: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-2381/pods/dra-test-driver-86zdr/exec?command=rm&command=-rf&command=%2Fcdi%2Fdra-2381.k8s.io-c616330f-07b6-4656-b524-95a90bb804cb.json&container=plugin&container=plugin&stderr=true&stdout=true)
I0314 13:58:59.462645 66289 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker2" pod="dra-2381/dra-test-driver-lgb7f" requestID=2 request="&NodeUnprepareResourceRequest{Namespace:dra-2381,ClaimUid:c616330f-07b6-4656-b524-95a90bb804cb,ClaimName:external-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}"
I0314 13:58:59.537425 66289 io.go:119] "Command completed" command=[rm -rf /cdi/dra-2381.k8s.io-c616330f-07b6-4656-b524-95a90bb804cb.json] stdout="" stderr="" err=<nil>
I0314 13:58:59.537468 66289 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker" pod="dra-2381/dra-test-driver-86zdr" requestID=2 response="&NodeUnprepareResourceResponse{}"
STEP: deleting CDI file /cdi/dra-2381.k8s.io-c616330f-07b6-4656-b524-95a90bb804cb.json on node kind-worker2 - test/e2e/dra/deploy.go:221 @ 03/14/23 13:58:59.537
Mar 14 13:58:59.537: INFO: >>> kubeConfig: /root/.kube/config
Mar 14 13:58:59.539: INFO: ExecWithOptions: Clientset creation
Mar 14 13:58:59.539: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-2381/pods/dra-test-driver-lgb7f/exec?command=rm&command=-rf&command=%2Fcdi%2Fdra-2381.k8s.io-c616330f-07b6-4656-b524-95a90bb804cb.json&container=plugin&container=plugin&stderr=true&stdout=true)
I0314 13:58:59.668704 66289 io.go:119] "Command completed" command=[rm -rf /cdi/dra-2381.k8s.io-c616330f-07b6-4656-b524-95a90bb804cb.json] stdout="" stderr="" err=<nil>
I0314 13:58:59.668752 66289 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker2" pod="dra-2381/dra-test-driver-lgb7f" requestID=2 response="&NodeUnprepareResourceResponse{}"
STEP: deleting *v1alpha2.ResourceClaim dra-2381/external-claim - test/e2e/dra/dra.go:796 @ 03/14/23 13:59:01.984
STEP: waiting for resources on kind-worker to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 13:59:02.011
STEP: waiting for resources on kind-worker2 to be
unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 13:59:02.011
STEP: waiting for claims to be deallocated and deleted - test/e2e/dra/dra.go:808 @ 03/14/23 13:59:02.011
E0314 13:59:02.029752 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2381/external-claim"
E0314 13:59:02.046097 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2381/external-claim"
E0314 13:59:02.068388 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2381/external-claim"
E0314 13:59:02.097524 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2381/external-claim"
E0314 13:59:02.147829 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2381/external-claim"
E0314 13:59:02.240764 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2381/external-claim"
E0314 13:59:02.413729 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2381/external-claim"
E0314 13:59:02.747468 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2381/external-claim"
E0314 13:59:03.423942 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2381/external-claim"
E0314 13:59:04.741836 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2381/external-claim"
E0314 13:59:07.308149 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2381/external-claim"
E0314 13:59:12.434094 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2381/external-claim"
E0314 13:59:22.679463 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2381/external-claim"
E0314 13:59:43.164995 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2381/external-claim"
[FAILED] Timed out after 60.000s.
claims in the namespaces
Expected
    <[]v1alpha2.ResourceClaim | len:1, cap:1>:
    - metadata:
        creationTimestamp: "2023-03-14T13:58:51Z"
        deletionGracePeriodSeconds: 0
        deletionTimestamp: "2023-03-14T13:59:01Z"
        finalizers:
        - dra-2381.k8s.io/deletion-protection
        managedFields:
        - apiVersion: resource.k8s.io/v1alpha2
          fieldsType: FieldsV1
          fieldsV1:
            f:metadata:
              f:finalizers:
                .: {}
                v:"dra-2381.k8s.io/deletion-protection": {}
            f:spec:
              f:allocationMode: {}
              f:parametersRef:
                .: {}
                f:kind: {}
                f:name: {}
              f:resourceClassName: {}
          manager: e2e.test
          operation: Update
          time: "2023-03-14T13:58:51Z"
        - apiVersion: resource.k8s.io/v1alpha2
          fieldsType: FieldsV1
          fieldsV1:
            f:status:
              f:allocation:
                .: {}
                f:availableOnNodes: {}
                f:context: {}
                f:shareable: {}
              f:driverName: {}
          manager: e2e.test
          operation: Update
          subresource: status
          time: "2023-03-14T13:58:51Z"
        name: external-claim
        namespace: dra-2381
        resourceVersion: "1282"
        uid: c616330f-07b6-4656-b524-95a90bb804cb
      spec:
        allocationMode: WaitForFirstConsumer
        parametersRef:
          kind: ConfigMap
          name: parameters-1
        resourceClassName: dra-2381-class
      status:
        allocation:
          availableOnNodes:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - kind-worker
                - kind-worker2
          context:
          - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}'
          shareable: true
        driverName: dra-2381.k8s.io
to be empty
In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:00:02.013
< Exit [DeferCleanup (Each)] cluster - test/e2e/dra/dra.go:762 @ 03/14/23 14:00:02.013 (1m4.117s)
> Enter [DeferCleanup (Each)] cluster - test/e2e/dra/deploy.go:103 @ 03/14/23 14:00:02.013
I0314 14:00:02.015101 66289
controller.go:310] "resource controller: Shutting down" driver="dra-2381.k8s.io"
E0314 14:00:02.018235 66289 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker2" pod="dra-2381/dra-test-driver-lgb7f"
E0314 14:00:02.018488 66289 nonblockinggrpcserver.go:101] "kubelet plugin/registrar: GRPC server failed" err="listening was stopped" node="kind-worker2" pod="dra-2381/dra-test-driver-lgb7f"
E0314 14:00:02.019477 66289 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker" pod="dra-2381/dra-test-driver-86zdr"
< Exit [DeferCleanup (Each)] cluster - test/e2e/dra/deploy.go:103 @ 03/14/23 14:00:02.019 (6ms)
> Enter [DeferCleanup (Each)] cluster - deleting *v1.ReplicaSet: dra-2381/dra-test-driver | create.go:156 @ 03/14/23 14:00:02.019
< Exit [DeferCleanup (Each)] cluster - deleting *v1.ReplicaSet: dra-2381/dra-test-driver | create.go:156 @ 03/14/23 14:00:02.046 (27ms)
> Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 14:00:02.046
< Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 14:00:02.046 (0s)
> Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:00:02.046
STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:00:02.047
STEP: Collecting events from namespace "dra-2381". - test/e2e/framework/debug/dump.go:42 @ 03/14/23 14:00:02.047
STEP: Found 37 events.
- test/e2e/framework/debug/dump.go:46 @ 03/14/23 14:00:02.054 Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:45 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-86zdr Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:45 +0000 UTC - event for dra-test-driver-86zdr: {default-scheduler } Scheduled: Successfully assigned dra-2381/dra-test-driver-86zdr to kind-worker Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:46 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-lgb7f Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:46 +0000 UTC - event for dra-test-driver-86zdr: {kubelet kind-worker} Pulling: Pulling image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:46 +0000 UTC - event for dra-test-driver-lgb7f: {default-scheduler } Scheduled: Successfully assigned dra-2381/dra-test-driver-lgb7f to kind-worker2 Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:46 +0000 UTC - event for dra-test-driver-lgb7f: {kubelet kind-worker2} Pulling: Pulling image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:48 +0000 UTC - event for dra-test-driver-86zdr: {kubelet kind-worker} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" in 191.928764ms (1.62793118s including waiting) Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:48 +0000 UTC - event for dra-test-driver-86zdr: {kubelet kind-worker} Created: Created container registrar Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:48 +0000 UTC - event for dra-test-driver-86zdr: {kubelet kind-worker} Started: Started container registrar Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:48 +0000 UTC - event for dra-test-driver-86zdr: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:00:02.054: INFO: At 2023-03-14 
13:58:48 +0000 UTC - event for dra-test-driver-86zdr: {kubelet kind-worker} Created: Created container plugin Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:48 +0000 UTC - event for dra-test-driver-86zdr: {kubelet kind-worker} Started: Started container plugin Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:48 +0000 UTC - event for dra-test-driver-lgb7f: {kubelet kind-worker2} Started: Started container registrar Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:48 +0000 UTC - event for dra-test-driver-lgb7f: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:48 +0000 UTC - event for dra-test-driver-lgb7f: {kubelet kind-worker2} Created: Created container plugin Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:48 +0000 UTC - event for dra-test-driver-lgb7f: {kubelet kind-worker2} Started: Started container plugin Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:48 +0000 UTC - event for dra-test-driver-lgb7f: {kubelet kind-worker2} Created: Created container registrar Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:48 +0000 UTC - event for dra-test-driver-lgb7f: {kubelet kind-worker2} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" in 186.525301ms (1.337130071s including waiting) Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:51 +0000 UTC - event for tester-1: {default-scheduler } FailedScheduling: running Reserve plugin "DynamicResources": waiting for resource driver to allocate resource Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:51 +0000 UTC - event for tester-2: {default-scheduler } FailedScheduling: running Reserve plugin "DynamicResources": waiting for resource driver to allocate resource Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:51 +0000 UTC - event for tester-3: {default-scheduler } Scheduled: Successfully assigned dra-2381/tester-3 to kind-worker2 Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:52 
+0000 UTC - event for tester-3: {kubelet kind-worker2} Pulling: Pulling image "registry.k8s.io/e2e-test-images/busybox:1.29-4" Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:53 +0000 UTC - event for tester-1: {default-scheduler } Scheduled: Successfully assigned dra-2381/tester-1 to kind-worker Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:53 +0000 UTC - event for tester-2: {default-scheduler } Scheduled: Successfully assigned dra-2381/tester-2 to kind-worker2 Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:53 +0000 UTC - event for tester-3: {kubelet kind-worker2} Started: Started container with-resource Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:53 +0000 UTC - event for tester-3: {kubelet kind-worker2} Created: Created container with-resource Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:53 +0000 UTC - event for tester-3: {kubelet kind-worker2} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/busybox:1.29-4" in 644.266422ms (644.272739ms including waiting) Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:54 +0000 UTC - event for tester-2: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:54 +0000 UTC - event for tester-2: {kubelet kind-worker2} Created: Created container with-resource Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:55 +0000 UTC - event for tester-1: {kubelet kind-worker} Started: Started container with-resource Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:55 +0000 UTC - event for tester-1: {kubelet kind-worker} Created: Created container with-resource Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:55 +0000 UTC - event for tester-1: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:55 +0000 UTC - event for tester-2: {kubelet kind-worker2} Started: Started container with-resource Mar 14 
14:00:02.054: INFO: At 2023-03-14 13:58:57 +0000 UTC - event for tester-1: {kubelet kind-worker} Killing: Stopping container with-resource Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:57 +0000 UTC - event for tester-2: {kubelet kind-worker2} Killing: Stopping container with-resource Mar 14 14:00:02.054: INFO: At 2023-03-14 13:58:57 +0000 UTC - event for tester-3: {kubelet kind-worker2} Killing: Stopping container with-resource Mar 14 14:00:02.054: INFO: At 2023-03-14 13:59:02 +0000 UTC - event for external-claim: {resource driver dra-2381.k8s.io } Failed: remove allocation: ResourceClaim.resource.k8s.io "external-claim" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete Mar 14 14:00:02.062: INFO: POD NODE PHASE GRACE CONDITIONS Mar 14 14:00:02.062: INFO: dra-test-driver-86zdr kind-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:45 +0000 UTC }] Mar 14 14:00:02.062: INFO: dra-test-driver-lgb7f kind-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:46 +0000 UTC }] Mar 14 14:00:02.062: INFO: Mar 14 14:00:02.164: INFO: Logging node info for node kind-control-plane Mar 14 14:00:02.167: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7b0c8f1f-7d2e-4b5f-ab52-0e2399b9f764 438 0 2023-03-14 13:57:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux 
node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-14 13:57:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:57:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} 
{<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:58:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e8e6b089f1f44ab8ef4a2bc879ddd73,SystemUUID:ee43f17b-1489-4ea4-bec5-b7916f4f1fb0,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 
LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:00:02.168: INFO: Logging kubelet events for node kind-control-plane Mar 14 14:00:02.177: INFO: Logging pods the kubelet thinks is on node kind-control-plane Mar 14 14:00:02.194: INFO: kube-proxy-fm2jh started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:02.194: 
INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:00:02.194: INFO: coredns-ffc665895-mnldc started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:02.194: INFO: Container coredns ready: true, restart count 0 Mar 14 14:00:02.194: INFO: local-path-provisioner-687869657c-v9k2k started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:02.194: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 14 14:00:02.194: INFO: kube-apiserver-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:02.194: INFO: Container kube-apiserver ready: true, restart count 0 Mar 14 14:00:02.194: INFO: kindnet-nx87k started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:02.194: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:00:02.194: INFO: coredns-ffc665895-vmqts started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:02.194: INFO: Container coredns ready: true, restart count 0 Mar 14 14:00:02.194: INFO: kube-controller-manager-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:02.194: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 14 14:00:02.194: INFO: kube-scheduler-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:02.194: INFO: Container kube-scheduler ready: true, restart count 0 Mar 14 14:00:02.194: INFO: etcd-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:02.194: INFO: Container etcd ready: true, restart count 0 Mar 14 14:00:02.312: INFO: Latency metrics for node kind-control-plane Mar 14 14:00:02.312: INFO: Logging node info for node kind-worker Mar 14 14:00:02.318: INFO: Node Info: &Node{ObjectMeta:{kind-worker 9cca062e-b3b4-4ef2-9c10-412063b4ece4 1368 0 
2023-03-14 13:58:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-14 13:58:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2023-03-14 13:59:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5a3b3841831c42fc96e5cb187f537f04,SystemUUID:ed67c939-37e3-47de-ab06-0144304a5aa1,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de 
registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:00:02.319: INFO: Logging kubelet events for node kind-worker Mar 14 14:00:02.326: INFO: Logging pods the kubelet thinks is on node kind-worker Mar 14 14:00:02.344: INFO: dra-test-driver-5j9fw started at 2023-03-14 13:58:45 +0000 UTC (0+2 container 
statuses recorded) Mar 14 14:00:02.344: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:02.344: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:02.344: INFO: tester-1 started at 2023-03-14 13:58:55 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:02.344: INFO: Container with-resource ready: false, restart count 0 Mar 14 14:00:02.344: INFO: dra-test-driver-t8wgt started at 2023-03-14 14:00:00 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:02.344: INFO: Container plugin ready: false, restart count 0 Mar 14 14:00:02.344: INFO: Container registrar ready: false, restart count 0 Mar 14 14:00:02.344: INFO: dra-test-driver-8jmwc started at 2023-03-14 14:00:01 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:02.344: INFO: Container plugin ready: false, restart count 0 Mar 14 14:00:02.344: INFO: Container registrar ready: false, restart count 0 Mar 14 14:00:02.344: INFO: kindnet-fzdn9 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:02.344: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:00:02.344: INFO: kube-proxy-l4q98 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:02.344: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:00:02.344: INFO: dra-test-driver-6zxqg started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:02.344: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:02.344: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:02.344: INFO: dra-test-driver-other-xtlpg started at 2023-03-14 13:58:51 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:02.344: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:02.344: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:02.344: INFO: dra-test-driver-4t66h started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:02.344: 
INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:02.344: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:02.344: INFO: dra-test-driver-86zdr started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:02.344: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:02.344: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:02.344: INFO: dra-test-driver-wfhjf started at 2023-03-14 14:00:00 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:02.344: INFO: Container plugin ready: false, restart count 0 Mar 14 14:00:02.344: INFO: Container registrar ready: false, restart count 0 Mar 14 14:00:02.344: INFO: dra-test-driver-r8q6p started at 2023-03-14 14:00:00 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:02.344: INFO: Container plugin ready: false, restart count 0 Mar 14 14:00:02.344: INFO: Container registrar ready: false, restart count 0 Mar 14 14:00:02.756: INFO: Latency metrics for node kind-worker Mar 14 14:00:02.756: INFO: Logging node info for node kind-worker2 Mar 14 14:00:02.761: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 49a194e2-5e70-437e-aa3c-3a490ff23c54 1358 0 2023-03-14 13:58:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:58:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:59:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 
13:58:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:48810b9a669b47cea51d5fa0f821cf84,SystemUUID:603f9452-86ad-460a-83be-e3f10d4a362c,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 
registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:00:02.761: INFO: Logging kubelet events for node kind-worker2 Mar 14 14:00:02.766: INFO: Logging pods the kubelet thinks is on node kind-worker2 Mar 14 14:00:02.776: INFO: dra-test-driver-f8m4d started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:02.776: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:02.776: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:02.776: INFO: kindnet-5qdz7 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:02.776: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:00:02.776: INFO: dra-test-driver-jmtw2 started at 2023-03-14 14:00:00 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:02.776: INFO: 
Container plugin ready: false, restart count 0 Mar 14 14:00:02.776: INFO: Container registrar ready: false, restart count 0 Mar 14 14:00:02.776: INFO: dra-test-driver-ss4k7 started at 2023-03-14 14:00:01 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:02.776: INFO: Container plugin ready: false, restart count 0 Mar 14 14:00:02.776: INFO: Container registrar ready: false, restart count 0 Mar 14 14:00:02.776: INFO: kube-proxy-vnlx8 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:02.776: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:00:02.776: INFO: dra-test-driver-7qxsw started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:02.776: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:02.776: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:02.776: INFO: dra-test-driver-lgb7f started at 2023-03-14 13:58:46 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:02.776: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:02.776: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:02.776: INFO: dra-test-driver-other-mp779 started at 2023-03-14 13:58:51 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:02.776: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:02.776: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:02.844: INFO: Latency metrics for node kind-worker2 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:00:02.844 (797ms) < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:00:02.844 (797ms) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:00:02.844 STEP: Destroying namespace "dra-2381" for this suite. 
- test/e2e/framework/framework.go:351 @ 03/14/23 14:00:02.844 < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:00:02.852 (8ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:00:02.852 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:00:02.852 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sDRA\s\[Feature\:DynamicResourceAllocation\]\scluster\swith\sdelayed\sallocation\ssupports\sinit\scontainers$'
[FAILED] Timed out after 60.001s. claims in the namespaces Expected <[]v1alpha2.ResourceClaim | len:1, cap:1>: - metadata: creationTimestamp: "2023-03-14T14:01:13Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T14:01:23Z" finalizers: - dra-2540.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:ownerReferences: .: {} k:{"uid":"d5f78aaf-bd49-4796-921c-44d938e4b0e2"}: {} f:spec: f:allocationMode: {} f:parametersRef: .: {} f:kind: {} f:name: {} f:resourceClassName: {} manager: kube-controller-manager operation: Update time: "2023-03-14T14:01:13Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"dra-2540.k8s.io/deletion-protection": {} manager: e2e.test operation: Update time: "2023-03-14T14:01:14Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: f:allocation: .: {} f:availableOnNodes: {} f:context: {} f:shareable: {} f:driverName: {} manager: e2e.test operation: Update subresource: status time: "2023-03-14T14:01:14Z" name: tester-1-my-inline-claim namespace: dra-2540 ownerReferences: - apiVersion: v1 blockOwnerDeletion: true controller: true kind: Pod name: tester-1 uid: d5f78aaf-bd49-4796-921c-44d938e4b0e2 resourceVersion: "3018" uid: 07cb9267-3910-4367-84df-0ee1d1a41787 spec: allocationMode: WaitForFirstConsumer parametersRef: kind: ConfigMap name: parameters-1 resourceClassName: dra-2540-class status: allocation: availableOnNodes: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kind-worker - kind-worker2 context: - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}' shareable: true driverName: dra-2540.k8s.io to be empty In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:02:25.337from junit_01.xml
> Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 14:01:09.14 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:211 @ 03/14/23 14:01:09.14 Mar 14 14:01:09.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dra - test/e2e/framework/framework.go:250 @ 03/14/23 14:01:09.141 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 03/14/23 14:01:09.16 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:262 @ 03/14/23 14:01:09.165 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 14:01:09.17 (30ms) > Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 14:01:09.17 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 14:01:09.17 (0s) > Enter [BeforeEach] cluster - test/e2e/dra/deploy.go:62 @ 03/14/23 14:01:09.17 STEP: selecting nodes - test/e2e/dra/deploy.go:63 @ 03/14/23 14:01:09.17 Mar 14 14:01:09.175: INFO: testing on nodes [kind-worker kind-worker2] < Exit [BeforeEach] cluster - test/e2e/dra/deploy.go:62 @ 03/14/23 14:01:09.175 (5ms) > Enter [BeforeEach] cluster - test/e2e/dra/deploy.go:95 @ 03/14/23 14:01:09.175 STEP: deploying driver on nodes [kind-worker kind-worker2] - test/e2e/dra/deploy.go:130 @ 03/14/23 14:01:09.175 I0314 14:01:09.176478 66267 controller.go:295] "resource controller: Starting" driver="dra-2540.k8s.io" Mar 14 14:01:09.177: INFO: creating *v1.ReplicaSet: dra-2540/dra-test-driver I0314 14:01:11.208680 66267 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker" pod="dra-2540/dra-test-driver-b7dnq" I0314 14:01:11.208704 66267 nonblockinggrpcserver.go:107] "kubelet 
plugin/registrar: GRPC server started" node="kind-worker" pod="dra-2540/dra-test-driver-b7dnq" I0314 14:01:11.209893 66267 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker2" pod="dra-2540/dra-test-driver-vvn4m" I0314 14:01:11.209912 66267 nonblockinggrpcserver.go:107] "kubelet plugin/registrar: GRPC server started" node="kind-worker2" pod="dra-2540/dra-test-driver-vvn4m" STEP: wait for plugin registration - test/e2e/dra/deploy.go:242 @ 03/14/23 14:01:11.209 I0314 14:01:11.413658 66267 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-2540/dra-test-driver-vvn4m" requestID=1 request="&InfoRequest{}" I0314 14:01:11.413713 66267 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-2540/dra-test-driver-vvn4m" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-2540.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-2540.k8s.io.sock,SupportedVersions:[1.0.0],}" I0314 14:01:11.414429 66267 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-2540/dra-test-driver-b7dnq" requestID=1 request="&InfoRequest{}" I0314 14:01:11.414468 66267 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-2540/dra-test-driver-b7dnq" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-2540.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-2540.k8s.io.sock,SupportedVersions:[1.0.0],}" I0314 14:01:11.425816 66267 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-2540/dra-test-driver-b7dnq" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}" I0314 14:01:11.425853 66267 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-2540/dra-test-driver-b7dnq" requestID=2 response="&RegistrationStatusResponse{}" 
I0314 14:01:11.427543 66267 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-2540/dra-test-driver-vvn4m" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}" I0314 14:01:11.427594 66267 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-2540/dra-test-driver-vvn4m" requestID=2 response="&RegistrationStatusResponse{}" < Exit [BeforeEach] cluster - test/e2e/dra/deploy.go:95 @ 03/14/23 14:01:13.21 (4.035s) > Enter [BeforeEach] cluster - test/e2e/dra/dra.go:752 @ 03/14/23 14:01:13.21 STEP: creating *v1alpha2.ResourceClass dra-2540-class - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:13.21 END STEP: creating *v1alpha2.ResourceClass dra-2540-class - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:13.216 (6ms) < Exit [BeforeEach] cluster - test/e2e/dra/dra.go:752 @ 03/14/23 14:01:13.216 (6ms) > Enter [It] supports init containers - test/e2e/dra/dra.go:222 @ 03/14/23 14:01:13.216 STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:13.216 END STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:13.222 (6ms) STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:13.222 END STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:13.228 (6ms) STEP: creating *v1alpha2.ResourceClaimTemplate tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:13.228 END STEP: creating *v1alpha2.ResourceClaimTemplate tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:13.238 (9ms) I0314 14:01:18.280384 66267 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker2" pod="dra-2540/dra-test-driver-vvn4m" requestID=1 request="&NodePrepareResourceRequest{Namespace:dra-2540,ClaimUid:07cb9267-3910-4367-84df-0ee1d1a41787,ClaimName:tester-1-my-inline-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" STEP: creating CDI file 
/cdi/dra-2540.k8s.io-07cb9267-3910-4367-84df-0ee1d1a41787.json on node kind-worker2: {"cdiVersion":"0.3.0","kind":"dra-2540.k8s.io/test","devices":[{"name":"claim-07cb9267-3910-4367-84df-0ee1d1a41787","containerEdits":{"env":["user_a=b"]}}]} - test/e2e/dra/deploy.go:217 @ 03/14/23 14:01:18.28 Mar 14 14:01:18.280: INFO: >>> kubeConfig: /root/.kube/config Mar 14 14:01:18.281: INFO: ExecWithOptions: Clientset creation Mar 14 14:01:18.281: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-2540/pods/dra-test-driver-vvn4m/exec?command=sh&command=-c&command=base64+-d+%3E%27%2Fcdi%2Fdra-2540.k8s.io-07cb9267-3910-4367-84df-0ee1d1a41787.json.tmp%27+%3C%3CEOF%0AeyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTI1NDAuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tMDdjYjkyNjctMzkxMC00MzY3LTg0ZGYtMGVlMWQxYTQxNzg3IiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19%0AEOF&container=plugin&container=plugin&stderr=true&stdout=true) I0314 14:01:18.396443 66267 io.go:119] "Command completed" command=< [sh -c base64 -d >'/cdi/dra-2540.k8s.io-07cb9267-3910-4367-84df-0ee1d1a41787.json.tmp' <<EOF eyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTI1NDAuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tMDdjYjkyNjctMzkxMC00MzY3LTg0ZGYtMGVlMWQxYTQxNzg3IiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19 EOF] > stdout="" stderr="" err=<nil> Mar 14 14:01:18.396: INFO: >>> kubeConfig: /root/.kube/config Mar 14 14:01:18.397: INFO: ExecWithOptions: Clientset creation Mar 14 14:01:18.397: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-2540/pods/dra-test-driver-vvn4m/exec?command=mv&command=%2Fcdi%2Fdra-2540.k8s.io-07cb9267-3910-4367-84df-0ee1d1a41787.json.tmp&command=%2Fcdi%2Fdra-2540.k8s.io-07cb9267-3910-4367-84df-0ee1d1a41787.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 14:01:18.511400 66267 io.go:119] "Command completed" command=[mv /cdi/dra-2540.k8s.io-07cb9267-3910-4367-84df-0ee1d1a41787.json.tmp 
/cdi/dra-2540.k8s.io-07cb9267-3910-4367-84df-0ee1d1a41787.json] stdout="" stderr="" err=<nil>
I0314 14:01:18.511462 66267 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker2" pod="dra-2540/dra-test-driver-vvn4m" requestID=1 response="&NodePrepareResourceResponse{CdiDevices:[dra-2540.k8s.io/test=claim-07cb9267-3910-4367-84df-0ee1d1a41787],}"
< Exit [It] supports init containers - test/e2e/dra/dra.go:222 @ 03/14/23 14:01:21.27 (8.054s)
> Enter [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 14:01:21.27
Mar 14 14:01:21.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
< Exit [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 14:01:21.277 (7ms)
> Enter [DeferCleanup (Each)] cluster - test/e2e/dra/dra.go:762 @ 03/14/23 14:01:21.277
STEP: delete pods and claims - test/e2e/dra/dra.go:773 @ 03/14/23 14:01:21.29
STEP: deleting *v1.Pod dra-2540/tester-1 - test/e2e/dra/dra.go:780 @ 03/14/23 14:01:21.298
I0314 14:01:22.624986 66267 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker2" pod="dra-2540/dra-test-driver-vvn4m" requestID=2 request="&NodeUnprepareResourceRequest{Namespace:dra-2540,ClaimUid:07cb9267-3910-4367-84df-0ee1d1a41787,ClaimName:tester-1-my-inline-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}"
STEP: deleting CDI file /cdi/dra-2540.k8s.io-07cb9267-3910-4367-84df-0ee1d1a41787.json on node kind-worker2 - test/e2e/dra/deploy.go:221 @ 03/14/23 14:01:22.625
Mar 14 14:01:22.625: INFO: >>> kubeConfig: /root/.kube/config
Mar 14 14:01:22.625: INFO: ExecWithOptions: Clientset creation
Mar 14 14:01:22.625: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-2540/pods/dra-test-driver-vvn4m/exec?command=rm&command=-rf&command=%2Fcdi%2Fdra-2540.k8s.io-07cb9267-3910-4367-84df-0ee1d1a41787.json&container=plugin&container=plugin&stderr=true&stdout=true)
I0314 14:01:22.720104 66267 io.go:119] "Command completed" command=[rm -rf /cdi/dra-2540.k8s.io-07cb9267-3910-4367-84df-0ee1d1a41787.json] stdout="" stderr="" err=<nil>
I0314 14:01:22.720155 66267 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker2" pod="dra-2540/dra-test-driver-vvn4m" requestID=2 response="&NodeUnprepareResourceResponse{}"
E0314 14:01:23.484468 66267 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2540/tester-1-my-inline-claim"
E0314 14:01:23.496317 66267 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2540/tester-1-my-inline-claim"
E0314 14:01:23.513251 66267 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2540/tester-1-my-inline-claim"
E0314 14:01:23.540301 66267 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2540/tester-1-my-inline-claim"
E0314 14:01:23.586386 66267 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2540/tester-1-my-inline-claim"
E0314 14:01:23.673330 66267 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2540/tester-1-my-inline-claim"
E0314 14:01:23.844864 66267 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2540/tester-1-my-inline-claim"
E0314 14:01:24.178984 66267 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2540/tester-1-my-inline-claim"
E0314 14:01:24.835921 66267 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2540/tester-1-my-inline-claim"
STEP: waiting for resources on kind-worker to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 14:01:25.336
STEP: waiting for resources on kind-worker2 to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 14:01:25.336
STEP: waiting for claims to be deallocated and deleted - test/e2e/dra/dra.go:808 @ 03/14/23 14:01:25.336
E0314 14:01:26.126342 66267 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2540/tester-1-my-inline-claim"
E0314 14:01:28.703709 66267 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2540/tester-1-my-inline-claim"
E0314 14:01:33.833249 66267 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2540/tester-1-my-inline-claim"
E0314 14:01:44.082069 66267 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2540/tester-1-my-inline-claim"
E0314 14:02:04.566845 66267 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2540/tester-1-my-inline-claim"
[FAILED] Timed out after 60.001s.
claims in the namespaces
Expected <[]v1alpha2.ResourceClaim | len:1, cap:1>:
- metadata:
    creationTimestamp: "2023-03-14T14:01:13Z"
    deletionGracePeriodSeconds: 0
    deletionTimestamp: "2023-03-14T14:01:23Z"
    finalizers:
    - dra-2540.k8s.io/deletion-protection
    managedFields:
    - apiVersion: resource.k8s.io/v1alpha2
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:ownerReferences:
            .: {}
            k:{"uid":"d5f78aaf-bd49-4796-921c-44d938e4b0e2"}: {}
        f:spec:
          f:allocationMode: {}
          f:parametersRef:
            .: {}
            f:kind: {}
            f:name: {}
          f:resourceClassName: {}
      manager: kube-controller-manager
      operation: Update
      time: "2023-03-14T14:01:13Z"
    - apiVersion: resource.k8s.io/v1alpha2
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:finalizers:
            .: {}
            v:"dra-2540.k8s.io/deletion-protection": {}
      manager: e2e.test
      operation: Update
      time: "2023-03-14T14:01:14Z"
    - apiVersion: resource.k8s.io/v1alpha2
      fieldsType: FieldsV1
      fieldsV1:
        f:status:
          f:allocation:
            .: {}
            f:availableOnNodes: {}
            f:context: {}
            f:shareable: {}
          f:driverName: {}
      manager: e2e.test
      operation: Update
      subresource: status
      time: "2023-03-14T14:01:14Z"
    name: tester-1-my-inline-claim
    namespace: dra-2540
    ownerReferences:
    - apiVersion: v1
      blockOwnerDeletion: true
      controller: true
      kind: Pod
      name: tester-1
      uid: d5f78aaf-bd49-4796-921c-44d938e4b0e2
    resourceVersion: "3018"
    uid: 07cb9267-3910-4367-84df-0ee1d1a41787
  spec:
    allocationMode: WaitForFirstConsumer
    parametersRef:
      kind: ConfigMap
      name: parameters-1
    resourceClassName: dra-2540-class
  status:
    allocation:
      availableOnNodes:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - kind-worker
            - kind-worker2
      context:
      - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}'
      shareable: true
    driverName: dra-2540.k8s.io
to be empty
In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:02:25.337
< Exit [DeferCleanup (Each)] cluster - test/e2e/dra/dra.go:762 @ 03/14/23 14:02:25.337 (1m4.061s)
> Enter [DeferCleanup (Each)] cluster - test/e2e/dra/deploy.go:103 @ 03/14/23 14:02:25.337
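The repeated "Forbidden" retries above all trip the same apiserver validation of the ResourceClaim status subresource: the driver tries to clear status.allocation on a claim for which deallocation was never requested, so every retry fails, the dra-2540.k8s.io/deletion-protection finalizer is never removed, and the claim survives past the 60s cleanup timeout. Below is a minimal, self-contained Go sketch of that rule as inferred from the error message; the real types live in k8s.io/api/resource/v1alpha2 and the real check in apiserver validation code, so the names here are illustrative stand-ins, not the actual implementation.

```go
package main

import "fmt"

// AllocationResult and ResourceClaimStatus are pared-down stand-ins for the
// resource.k8s.io/v1alpha2 types appearing in the log above.
type AllocationResult struct {
	Shareable bool
}

type ResourceClaimStatus struct {
	DriverName            string
	Allocation            *AllocationResult
	DeallocationRequested bool
}

// validateStatusUpdate models the rule behind "status.allocation: Forbidden:
// can only remove while marking a deallocation as complete": clearing
// status.allocation is only allowed when deallocation had been requested on
// the old object and the update clears that flag in the same write.
func validateStatusUpdate(oldStatus, newStatus ResourceClaimStatus) error {
	if oldStatus.Allocation != nil && newStatus.Allocation == nil {
		if !oldStatus.DeallocationRequested || newStatus.DeallocationRequested {
			return fmt.Errorf("status.allocation: Forbidden: can only remove while marking a deallocation as complete")
		}
	}
	return nil
}

func main() {
	allocated := ResourceClaimStatus{
		DriverName: "dra-2540.k8s.io",
		Allocation: &AllocationResult{Shareable: true},
	}
	cleared := ResourceClaimStatus{DriverName: "dra-2540.k8s.io"}

	// What the controller in this log attempts: remove the allocation while
	// deallocationRequested was never set. Rejected on every retry.
	fmt.Println(validateStatusUpdate(allocated, cleared))

	// The accepted path: deallocation is requested first, then the driver
	// clears both the allocation and the flag together.
	requested := allocated
	requested.DeallocationRequested = true
	fmt.Println(validateStatusUpdate(requested, cleared))
}
```

Under this reading, the v1alpha2 test driver (or the e2e cleanup path) is skipping the deallocation-requested step before removing the allocation, which matches the claim's status in the dump: allocation set, no deallocationRequested.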
I0314 14:02:25.338303 66267 controller.go:310] "resource controller: Shutting down" driver="dra-2540.k8s.io"
E0314 14:02:25.339070 66267 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker2" pod="dra-2540/dra-test-driver-vvn4m"
E0314 14:02:25.339182 66267 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker" pod="dra-2540/dra-test-driver-b7dnq"
E0314 14:02:25.339823 66267 nonblockinggrpcserver.go:101] "kubelet plugin/registrar: GRPC server failed" err="listening was stopped" node="kind-worker2" pod="dra-2540/dra-test-driver-vvn4m"
< Exit [DeferCleanup (Each)] cluster - test/e2e/dra/deploy.go:103 @ 03/14/23 14:02:25.339 (2ms)
> Enter [DeferCleanup (Each)] cluster - deleting *v1.ReplicaSet: dra-2540/dra-test-driver | create.go:156 @ 03/14/23 14:02:25.339
< Exit [DeferCleanup (Each)] cluster - deleting *v1.ReplicaSet: dra-2540/dra-test-driver | create.go:156 @ 03/14/23 14:02:25.351 (12ms)
> Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 14:02:25.351
< Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 14:02:25.351 (0s)
> Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:02:25.351
STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:02:25.351
STEP: Collecting events from namespace "dra-2540". - test/e2e/framework/debug/dump.go:42 @ 03/14/23 14:02:25.351
STEP: Found 28 events. - test/e2e/framework/debug/dump.go:46 @ 03/14/23 14:02:25.355
Mar 14 14:02:25.355: INFO: At 2023-03-14 14:01:09 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-vvn4m
Mar 14 14:02:25.355: INFO: At 2023-03-14 14:01:09 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-b7dnq
Mar 14 14:02:25.355: INFO: At 2023-03-14 14:01:09 +0000 UTC - event for dra-test-driver-b7dnq: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine
Mar 14 14:02:25.355: INFO: At 2023-03-14 14:01:09 +0000 UTC - event for dra-test-driver-b7dnq: {kubelet kind-worker} Created: Created container registrar
Mar 14 14:02:25.355: INFO: At 2023-03-14 14:01:09 +0000 UTC - event for dra-test-driver-b7dnq: {kubelet kind-worker} Started: Started container registrar
Mar 14 14:02:25.355: INFO: At 2023-03-14 14:01:09 +0000 UTC - event for dra-test-driver-b7dnq: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine
Mar 14 14:02:25.355: INFO: At 2023-03-14 14:01:09 +0000 UTC - event for dra-test-driver-b7dnq: {kubelet kind-worker} Created: Created container plugin
Mar 14 14:02:25.355: INFO: At 2023-03-14 14:01:09 +0000 UTC - event for dra-test-driver-b7dnq: {default-scheduler } Scheduled: Successfully assigned dra-2540/dra-test-driver-b7dnq to kind-worker
Mar 14 14:02:25.355: INFO: At 2023-03-14 14:01:09 +0000 UTC - event for dra-test-driver-vvn4m: {default-scheduler } Scheduled: Successfully assigned dra-2540/dra-test-driver-vvn4m to kind-worker2
Mar 14 14:02:25.355: INFO: At 2023-03-14 14:01:09 +0000 UTC - event for dra-test-driver-vvn4m: {kubelet kind-worker2} Created: Created container registrar
Mar 14 14:02:25.355: INFO: At 2023-03-14 14:01:09 +0000 UTC - event for dra-test-driver-vvn4m: {kubelet kind-worker2} Started: Started container registrar
Mar 14 14:02:25.355: INFO: At 2023-03-14 14:01:09 +0000 UTC - event for dra-test-driver-vvn4m: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine
Mar 14 14:02:25.355: INFO: At 2023-03-14 14:01:09 +0000 UTC - event for dra-test-driver-vvn4m: {kubelet kind-worker2} Created: Created container plugin
Mar 14 14:02:25.355: INFO: At 2023-03-14 14:01:09 +0000 UTC - event for dra-test-driver-vvn4m: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine
Mar 14 14:02:25.355: INFO: At 2023-03-14 14:01:10 +0000 UTC - event for dra-test-driver-b7dnq: {kubelet kind-worker} Started: Started container plugin
Mar 14 14:02:25.355: INFO: At 2023-03-14 14:01:10 +0000 UTC - event for dra-test-driver-vvn4m: {kubelet kind-worker2} Started: Started container plugin
Mar 14 14:02:25.355: INFO: At 2023-03-14 14:01:13 +0000 UTC - event for tester-1: {resource_claim } FailedResourceClaimCreation: PodResourceClaim my-inline-claim: resource claim template "tester-1": resourceclaimtemplate.resource.k8s.io "tester-1" not found
Mar 14 14:02:25.355: INFO: At 2023-03-14 14:01:13 +0000 UTC - event for tester-1: {default-scheduler } FailedScheduling: 0/3 nodes are available: waiting for dynamic resource controller to create the resourceclaim "tester-1-my-inline-claim". no new claims to deallocate, preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod..
Mar 14 14:02:25.355: INFO: At 2023-03-14 14:01:14 +0000 UTC - event for tester-1: {default-scheduler } FailedScheduling: running Reserve plugin "DynamicResources": waiting for resource driver to allocate resource
Mar 14 14:02:25.355: INFO: At 2023-03-14 14:01:17 +0000 UTC - event for tester-1: {default-scheduler } Scheduled: Successfully assigned dra-2540/tester-1 to kind-worker2
Mar 14 14:02:25.355: INFO: At 2023-03-14 14:01:18 +0000 UTC - event for tester-1: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine
Mar 14 14:02:25.355: INFO: At 2023-03-14 14:01:18 +0000 UTC - event for tester-1: {kubelet kind-worker2} Created: Created container with-resource-init
Mar 14 14:02:25.355: INFO: At 2023-03-14 14:01:18 +0000 UTC - event for tester-1: {kubelet kind-worker2} Started: Started container with-resource-init
Mar 14 14:02:25.355: INFO: At 2023-03-14 14:01:19 +0000 UTC - event for tester-1: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine
Mar 14 14:02:25.355: INFO: At 2023-03-14 14:01:19 +0000 UTC - event for tester-1: {kubelet kind-worker2} Created: Created container with-resource
Mar 14 14:02:25.355: INFO: At 2023-03-14 14:01:19 +0000 UTC - event for tester-1: {kubelet kind-worker2} Started: Started container with-resource
Mar 14 14:02:25.355: INFO: At 2023-03-14 14:01:21 +0000 UTC - event for tester-1: {kubelet kind-worker2} Killing: Stopping container with-resource
Mar 14 14:02:25.355: INFO: At 2023-03-14 14:01:23 +0000 UTC - event for tester-1-my-inline-claim: {resource driver dra-2540.k8s.io } Failed: remove allocation: ResourceClaim.resource.k8s.io "tester-1-my-inline-claim" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete
Mar 14 14:02:25.359: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 14 14:02:25.359: INFO: dra-test-driver-b7dnq kind-worker Running [{Initialized True
0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:09 +0000 UTC }] Mar 14 14:02:25.359: INFO: dra-test-driver-vvn4m kind-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:09 +0000 UTC }] Mar 14 14:02:25.359: INFO: Mar 14 14:02:25.405: INFO: Logging node info for node kind-control-plane Mar 14 14:02:25.409: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7b0c8f1f-7d2e-4b5f-ab52-0e2399b9f764 438 0 2023-03-14 13:57:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-14 13:57:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:57:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } 
{kube-controller-manager Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:58:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e8e6b089f1f44ab8ef4a2bc879ddd73,SystemUUID:ee43f17b-1489-4ea4-bec5-b7916f4f1fb0,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 
registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:02:25.409: INFO: Logging kubelet events for node kind-control-plane Mar 14 14:02:25.414: INFO: Logging pods the kubelet thinks is on node kind-control-plane Mar 14 14:02:25.430: INFO: coredns-ffc665895-vmqts started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:25.430: INFO: Container coredns ready: true, restart count 0 Mar 14 14:02:25.430: INFO: kube-controller-manager-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:25.430: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 14 14:02:25.430: INFO: kube-scheduler-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:25.430: INFO: Container kube-scheduler ready: true, restart count 0 Mar 14 14:02:25.430: INFO: etcd-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:25.430: INFO: Container etcd ready: true, restart count 0 Mar 14 14:02:25.430: INFO: kube-apiserver-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:25.430: INFO: Container kube-apiserver ready: true, restart count 0 Mar 14 14:02:25.430: INFO: kindnet-nx87k started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:25.430: INFO: Container kindnet-cni 
ready: true, restart count 0 Mar 14 14:02:25.430: INFO: kube-proxy-fm2jh started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:25.430: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:02:25.430: INFO: coredns-ffc665895-mnldc started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:25.430: INFO: Container coredns ready: true, restart count 0 Mar 14 14:02:25.430: INFO: local-path-provisioner-687869657c-v9k2k started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:25.430: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 14 14:02:25.498: INFO: Latency metrics for node kind-control-plane Mar 14 14:02:25.498: INFO: Logging node info for node kind-worker Mar 14 14:02:25.502: INFO: Node Info: &Node{ObjectMeta:{kind-worker 9cca062e-b3b4-4ef2-9c10-412063b4ece4 1368 0 2023-03-14 13:58:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-14 13:58:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet 
Update v1 2023-03-14 13:59:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:26 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5a3b3841831c42fc96e5cb187f537f04,SystemUUID:ed67c939-37e3-47de-ab06-0144304a5aa1,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 
registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:02:25.502: INFO: Logging kubelet events for node kind-worker Mar 14 14:02:25.508: INFO: Logging pods the kubelet thinks is on node kind-worker Mar 14 14:02:25.517: INFO: dra-test-driver-wsdzm started at 2023-03-14 14:01:20 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:25.517: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:25.517: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:25.517: INFO: dra-test-driver-zg2wf started at 2023-03-14 14:01:29 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:25.517: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:25.517: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:25.517: INFO: dra-test-driver-zrgsx started at 2023-03-14 14:01:15 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:25.517: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:25.517: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:25.517: 
INFO: kindnet-fzdn9 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:25.517: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:02:25.517: INFO: kube-proxy-l4q98 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:25.517: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:02:25.517: INFO: dra-test-driver-xrvr8 started at 2023-03-14 14:01:15 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:25.517: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:25.517: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:25.517: INFO: dra-test-driver-dx7tb started at 2023-03-14 14:01:20 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:25.517: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:25.517: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:25.517: INFO: dra-test-driver-9bgm4 started at 2023-03-14 14:01:23 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:25.517: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:25.517: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:25.517: INFO: dra-test-driver-b7dnq started at 2023-03-14 14:01:09 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:25.517: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:25.517: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:25.573: INFO: Latency metrics for node kind-worker Mar 14 14:02:25.573: INFO: Logging node info for node kind-worker2 Mar 14 14:02:25.578: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 49a194e2-5e70-437e-aa3c-3a490ff23c54 1358 0 2023-03-14 13:58:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock 
node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:58:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:59:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:48810b9a669b47cea51d5fa0f821cf84,SystemUUID:603f9452-86ad-460a-83be-e3f10d4a362c,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 
registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:02:25.578: INFO: Logging kubelet events for node kind-worker2 Mar 14 14:02:25.583: INFO: Logging pods the kubelet thinks is on node kind-worker2 Mar 14 14:02:25.593: INFO: kindnet-5qdz7 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:25.593: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:02:25.593: INFO: dra-test-driver-h6qql started at 2023-03-14 14:01:23 +0000 UTC (0+2 container statuses recorded) Mar 
14 14:02:25.593: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:25.593: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:25.593: INFO: kube-proxy-vnlx8 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:25.593: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:02:25.593: INFO: dra-test-driver-9f5vg started at 2023-03-14 14:01:29 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:25.593: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:25.593: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:25.593: INFO: dra-test-driver-nvf7d started at 2023-03-14 14:01:15 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:25.593: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:25.593: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:25.593: INFO: dra-test-driver-lgdqg started at 2023-03-14 14:01:20 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:25.593: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:25.593: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:25.593: INFO: dra-test-driver-vvn4m started at 2023-03-14 14:01:09 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:25.593: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:25.593: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:25.593: INFO: dra-test-driver-w6qnj started at 2023-03-14 14:01:20 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:25.593: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:25.593: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:25.670: INFO: Latency metrics for node kind-worker2 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:02:25.67 (319ms) < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | 
framework.go:209 @ 03/14/23 14:02:25.67 (319ms) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:02:25.67 STEP: Destroying namespace "dra-2540" for this suite. - test/e2e/framework/framework.go:351 @ 03/14/23 14:02:25.67 < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:02:25.678 (8ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:02:25.678 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:02:25.678 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sDRA\s\[Feature\:DynamicResourceAllocation\]\scluster\swith\sdelayed\sallocation\ssupports\sinline\sclaim\sreferenced\sby\smultiple\scontainers$'
[FAILED] Timed out after 60.001s. claims in the namespaces Expected <[]v1alpha2.ResourceClaim | len:1, cap:1>: - metadata: creationTimestamp: "2023-03-14T14:00:07Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T14:00:18Z" finalizers: - dra-2710.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:ownerReferences: .: {} k:{"uid":"c4b4167d-8d2a-4d39-a9e2-00ebdebeaaed"}: {} f:spec: f:allocationMode: {} f:parametersRef: .: {} f:kind: {} f:name: {} f:resourceClassName: {} manager: kube-controller-manager operation: Update time: "2023-03-14T14:00:07Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"dra-2710.k8s.io/deletion-protection": {} manager: e2e.test operation: Update time: "2023-03-14T14:00:08Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: f:allocation: .: {} f:availableOnNodes: {} f:context: {} f:shareable: {} f:driverName: {} manager: e2e.test operation: Update subresource: status time: "2023-03-14T14:00:08Z" name: tester-1-my-inline-claim namespace: dra-2710 ownerReferences: - apiVersion: v1 blockOwnerDeletion: true controller: true kind: Pod name: tester-1 uid: c4b4167d-8d2a-4d39-a9e2-00ebdebeaaed resourceVersion: "2299" uid: f4e9a462-2f4a-445a-8b14-dc51c1f851cf spec: allocationMode: WaitForFirstConsumer parametersRef: kind: ConfigMap name: parameters-1 resourceClassName: dra-2710-class status: allocation: availableOnNodes: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kind-worker - kind-worker2 context: - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}' shareable: true driverName: dra-2710.k8s.io to be empty In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:01:19.301from junit_01.xml
> Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 14:00:03.105 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:211 @ 03/14/23 14:00:03.105 Mar 14 14:00:03.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dra - test/e2e/framework/framework.go:250 @ 03/14/23 14:00:03.106 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 03/14/23 14:00:03.122 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:262 @ 03/14/23 14:00:03.126 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 14:00:03.13 (25ms) > Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 14:00:03.13 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 14:00:03.13 (0s) > Enter [BeforeEach] cluster - test/e2e/dra/deploy.go:62 @ 03/14/23 14:00:03.13 STEP: selecting nodes - test/e2e/dra/deploy.go:63 @ 03/14/23 14:00:03.13 Mar 14 14:00:03.134: INFO: testing on nodes [kind-worker kind-worker2] < Exit [BeforeEach] cluster - test/e2e/dra/deploy.go:62 @ 03/14/23 14:00:03.134 (4ms) > Enter [BeforeEach] cluster - test/e2e/dra/deploy.go:95 @ 03/14/23 14:00:03.134 STEP: deploying driver on nodes [kind-worker kind-worker2] - test/e2e/dra/deploy.go:130 @ 03/14/23 14:00:03.134 I0314 14:00:03.135020 66289 controller.go:295] "resource controller: Starting" driver="dra-2710.k8s.io" Mar 14 14:00:03.136: INFO: creating *v1.ReplicaSet: dra-2710/dra-test-driver I0314 14:00:05.168408 66289 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker2" pod="dra-2710/dra-test-driver-c9jh8" I0314 14:00:05.168440 66289 nonblockinggrpcserver.go:107] 
"kubelet plugin/registrar: GRPC server started" node="kind-worker2" pod="dra-2710/dra-test-driver-c9jh8" I0314 14:00:05.170086 66289 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker" pod="dra-2710/dra-test-driver-twq8l" I0314 14:00:05.170120 66289 nonblockinggrpcserver.go:107] "kubelet plugin/registrar: GRPC server started" node="kind-worker" pod="dra-2710/dra-test-driver-twq8l" STEP: wait for plugin registration - test/e2e/dra/deploy.go:242 @ 03/14/23 14:00:05.17 I0314 14:00:05.383666 66289 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-2710/dra-test-driver-twq8l" requestID=1 request="&InfoRequest{}" I0314 14:00:05.383712 66289 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-2710/dra-test-driver-twq8l" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-2710.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-2710.k8s.io.sock,SupportedVersions:[1.0.0],}" I0314 14:00:05.395540 66289 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-2710/dra-test-driver-twq8l" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}" I0314 14:00:05.395590 66289 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-2710/dra-test-driver-twq8l" requestID=2 response="&RegistrationStatusResponse{}" I0314 14:00:05.498907 66289 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-2710/dra-test-driver-c9jh8" requestID=1 request="&InfoRequest{}" I0314 14:00:05.498951 66289 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-2710/dra-test-driver-c9jh8" requestID=1 
response="&PluginInfo{Type:DRAPlugin,Name:dra-2710.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-2710.k8s.io.sock,SupportedVersions:[1.0.0],}" I0314 14:00:05.523616 66289 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-2710/dra-test-driver-c9jh8" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}" I0314 14:00:05.523654 66289 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-2710/dra-test-driver-c9jh8" requestID=2 response="&RegistrationStatusResponse{}" < Exit [BeforeEach] cluster - test/e2e/dra/deploy.go:95 @ 03/14/23 14:00:07.171 (4.037s) > Enter [BeforeEach] cluster - test/e2e/dra/dra.go:752 @ 03/14/23 14:00:07.171 STEP: creating *v1alpha2.ResourceClass dra-2710-class - test/e2e/dra/dra.go:706 @ 03/14/23 14:00:07.171 END STEP: creating *v1alpha2.ResourceClass dra-2710-class - test/e2e/dra/dra.go:706 @ 03/14/23 14:00:07.18 (9ms) < Exit [BeforeEach] cluster - test/e2e/dra/dra.go:752 @ 03/14/23 14:00:07.18 (9ms) > Enter [It] supports inline claim referenced by multiple containers - test/e2e/dra/dra.go:180 @ 03/14/23 14:00:07.18 STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:00:07.18 END STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:00:07.187 (7ms) STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:00:07.187 END STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:00:07.198 (11ms) STEP: creating *v1alpha2.ResourceClaimTemplate tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:00:07.198 END STEP: creating *v1alpha2.ResourceClaimTemplate tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:00:07.206 (8ms) I0314 14:00:12.247774 66289 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker2" pod="dra-2710/dra-test-driver-c9jh8" requestID=1 
request="&NodePrepareResourceRequest{Namespace:dra-2710,ClaimUid:f4e9a462-2f4a-445a-8b14-dc51c1f851cf,ClaimName:tester-1-my-inline-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" STEP: creating CDI file /cdi/dra-2710.k8s.io-f4e9a462-2f4a-445a-8b14-dc51c1f851cf.json on node kind-worker2: {"cdiVersion":"0.3.0","kind":"dra-2710.k8s.io/test","devices":[{"name":"claim-f4e9a462-2f4a-445a-8b14-dc51c1f851cf","containerEdits":{"env":["user_a=b"]}}]} - test/e2e/dra/deploy.go:217 @ 03/14/23 14:00:12.247 Mar 14 14:00:12.247: INFO: >>> kubeConfig: /root/.kube/config Mar 14 14:00:12.248: INFO: ExecWithOptions: Clientset creation Mar 14 14:00:12.248: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-2710/pods/dra-test-driver-c9jh8/exec?command=sh&command=-c&command=base64+-d+%3E%27%2Fcdi%2Fdra-2710.k8s.io-f4e9a462-2f4a-445a-8b14-dc51c1f851cf.json.tmp%27+%3C%3CEOF%0AeyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTI3MTAuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tZjRlOWE0NjItMmY0YS00NDVhLThiMTQtZGM1MWMxZjg1MWNmIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19%0AEOF&container=plugin&container=plugin&stderr=true&stdout=true) I0314 14:00:12.329641 66289 io.go:119] "Command completed" command=< [sh -c base64 -d >'/cdi/dra-2710.k8s.io-f4e9a462-2f4a-445a-8b14-dc51c1f851cf.json.tmp' <<EOF eyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTI3MTAuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tZjRlOWE0NjItMmY0YS00NDVhLThiMTQtZGM1MWMxZjg1MWNmIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19 EOF] > stdout="" stderr="" err=<nil> Mar 14 14:00:12.329: INFO: >>> kubeConfig: /root/.kube/config Mar 14 14:00:12.330: INFO: ExecWithOptions: Clientset creation Mar 14 14:00:12.330: INFO: ExecWithOptions: execute(POST 
https://127.0.0.1:34309/api/v1/namespaces/dra-2710/pods/dra-test-driver-c9jh8/exec?command=mv&command=%2Fcdi%2Fdra-2710.k8s.io-f4e9a462-2f4a-445a-8b14-dc51c1f851cf.json.tmp&command=%2Fcdi%2Fdra-2710.k8s.io-f4e9a462-2f4a-445a-8b14-dc51c1f851cf.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 14:00:12.432466 66289 io.go:119] "Command completed" command=[mv /cdi/dra-2710.k8s.io-f4e9a462-2f4a-445a-8b14-dc51c1f851cf.json.tmp /cdi/dra-2710.k8s.io-f4e9a462-2f4a-445a-8b14-dc51c1f851cf.json] stdout="" stderr="" err=<nil> I0314 14:00:12.432523 66289 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker2" pod="dra-2710/dra-test-driver-c9jh8" requestID=1 response="&NodePrepareResourceResponse{CdiDevices:[dra-2710.k8s.io/test=claim-f4e9a462-2f4a-445a-8b14-dc51c1f851cf],}" < Exit [It] supports inline claim referenced by multiple containers - test/e2e/dra/dra.go:180 @ 03/14/23 14:00:15.26 (8.08s) > Enter [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 14:00:15.26 Mar 14 14:00:15.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 14:00:15.264 (4ms) > Enter [DeferCleanup (Each)] cluster - test/e2e/dra/dra.go:762 @ 03/14/23 14:00:15.264 STEP: delete pods and claims - test/e2e/dra/dra.go:773 @ 03/14/23 14:00:15.269 STEP: deleting *v1.Pod dra-2710/tester-1 - test/e2e/dra/dra.go:780 @ 03/14/23 14:00:15.273 I0314 14:00:17.314799 66289 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker2" pod="dra-2710/dra-test-driver-c9jh8" requestID=2 request="&NodeUnprepareResourceRequest{Namespace:dra-2710,ClaimUid:f4e9a462-2f4a-445a-8b14-dc51c1f851cf,ClaimName:tester-1-my-inline-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" STEP: deleting CDI file 
/cdi/dra-2710.k8s.io-f4e9a462-2f4a-445a-8b14-dc51c1f851cf.json on node kind-worker2 - test/e2e/dra/deploy.go:221 @ 03/14/23 14:00:17.314 Mar 14 14:00:17.314: INFO: >>> kubeConfig: /root/.kube/config Mar 14 14:00:17.316: INFO: ExecWithOptions: Clientset creation Mar 14 14:00:17.316: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-2710/pods/dra-test-driver-c9jh8/exec?command=rm&command=-rf&command=%2Fcdi%2Fdra-2710.k8s.io-f4e9a462-2f4a-445a-8b14-dc51c1f851cf.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 14:00:17.427838 66289 io.go:119] "Command completed" command=[rm -rf /cdi/dra-2710.k8s.io-f4e9a462-2f4a-445a-8b14-dc51c1f851cf.json] stdout="" stderr="" err=<nil> I0314 14:00:17.427878 66289 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker2" pod="dra-2710/dra-test-driver-c9jh8" requestID=2 response="&NodeUnprepareResourceResponse{}" E0314 14:00:18.137794 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2710/tester-1-my-inline-claim" E0314 14:00:18.147288 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2710/tester-1-my-inline-claim" E0314 14:00:18.163222 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2710/tester-1-my-inline-claim" E0314 14:00:18.194753 66289 controller.go:345] "resource controller: processing failed" err="remove 
allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2710/tester-1-my-inline-claim" E0314 14:00:18.239836 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2710/tester-1-my-inline-claim" E0314 14:00:18.326129 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2710/tester-1-my-inline-claim" E0314 14:00:18.496830 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2710/tester-1-my-inline-claim" E0314 14:00:18.822781 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2710/tester-1-my-inline-claim" STEP: waiting for resources on kind-worker2 to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 14:00:19.3 STEP: waiting for resources on kind-worker to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 14:00:19.3 STEP: waiting for claims to be deallocated and deleted - test/e2e/dra/dra.go:808 @ 03/14/23 14:00:19.3 E0314 14:00:19.469571 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: 
status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2710/tester-1-my-inline-claim" E0314 14:00:20.754540 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2710/tester-1-my-inline-claim" E0314 14:00:23.319708 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2710/tester-1-my-inline-claim" E0314 14:00:28.445075 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2710/tester-1-my-inline-claim" E0314 14:00:38.691803 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2710/tester-1-my-inline-claim" E0314 14:00:59.178357 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2710/tester-1-my-inline-claim" [FAILED] Timed out after 60.001s. 
claims in the namespaces Expected <[]v1alpha2.ResourceClaim | len:1, cap:1>: - metadata: creationTimestamp: "2023-03-14T14:00:07Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T14:00:18Z" finalizers: - dra-2710.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:ownerReferences: .: {} k:{"uid":"c4b4167d-8d2a-4d39-a9e2-00ebdebeaaed"}: {} f:spec: f:allocationMode: {} f:parametersRef: .: {} f:kind: {} f:name: {} f:resourceClassName: {} manager: kube-controller-manager operation: Update time: "2023-03-14T14:00:07Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"dra-2710.k8s.io/deletion-protection": {} manager: e2e.test operation: Update time: "2023-03-14T14:00:08Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: f:allocation: .: {} f:availableOnNodes: {} f:context: {} f:shareable: {} f:driverName: {} manager: e2e.test operation: Update subresource: status time: "2023-03-14T14:00:08Z" name: tester-1-my-inline-claim namespace: dra-2710 ownerReferences: - apiVersion: v1 blockOwnerDeletion: true controller: true kind: Pod name: tester-1 uid: c4b4167d-8d2a-4d39-a9e2-00ebdebeaaed resourceVersion: "2299" uid: f4e9a462-2f4a-445a-8b14-dc51c1f851cf spec: allocationMode: WaitForFirstConsumer parametersRef: kind: ConfigMap name: parameters-1 resourceClassName: dra-2710-class status: allocation: availableOnNodes: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kind-worker - kind-worker2 context: - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}' shareable: true driverName: dra-2710.k8s.io to be empty In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:01:19.301 < Exit [DeferCleanup (Each)] cluster - test/e2e/dra/dra.go:762 @ 03/14/23 14:01:19.301 (1m4.038s) > Enter [DeferCleanup (Each)] cluster - test/e2e/dra/deploy.go:103 @ 03/14/23 14:01:19.302 
I0314 14:01:19.302870 66289 controller.go:310] "resource controller: Shutting down" driver="dra-2710.k8s.io" E0314 14:01:19.303837 66289 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker2" pod="dra-2710/dra-test-driver-c9jh8" E0314 14:01:19.304087 66289 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker" pod="dra-2710/dra-test-driver-twq8l" E0314 14:01:19.306198 66289 nonblockinggrpcserver.go:101] "kubelet plugin/registrar: GRPC server failed" err="listening was stopped" node="kind-worker" pod="dra-2710/dra-test-driver-twq8l" < Exit [DeferCleanup (Each)] cluster - test/e2e/dra/deploy.go:103 @ 03/14/23 14:01:19.306 (4ms) > Enter [DeferCleanup (Each)] cluster - deleting *v1.ReplicaSet: dra-2710/dra-test-driver | create.go:156 @ 03/14/23 14:01:19.306 < Exit [DeferCleanup (Each)] cluster - deleting *v1.ReplicaSet: dra-2710/dra-test-driver | create.go:156 @ 03/14/23 14:01:19.317 (12ms) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 14:01:19.317 < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 14:01:19.317 (0s) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:01:19.317 STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:01:19.317 STEP: Collecting events from namespace "dra-2710". - test/e2e/framework/debug/dump.go:42 @ 03/14/23 14:01:19.317 STEP: Found 33 events. 
- test/e2e/framework/debug/dump.go:46 @ 03/14/23 14:01:19.323 Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:03 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-twq8l Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:03 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-c9jh8 Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:03 +0000 UTC - event for dra-test-driver-c9jh8: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:03 +0000 UTC - event for dra-test-driver-c9jh8: {kubelet kind-worker2} Created: Created container registrar Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:03 +0000 UTC - event for dra-test-driver-c9jh8: {default-scheduler } Scheduled: Successfully assigned dra-2710/dra-test-driver-c9jh8 to kind-worker2 Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:03 +0000 UTC - event for dra-test-driver-twq8l: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:03 +0000 UTC - event for dra-test-driver-twq8l: {kubelet kind-worker} Created: Created container registrar Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:03 +0000 UTC - event for dra-test-driver-twq8l: {default-scheduler } Scheduled: Successfully assigned dra-2710/dra-test-driver-twq8l to kind-worker Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:04 +0000 UTC - event for dra-test-driver-c9jh8: {kubelet kind-worker2} Created: Created container plugin Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:04 +0000 UTC - event for dra-test-driver-c9jh8: {kubelet kind-worker2} Started: Started container plugin Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:04 +0000 UTC - event for dra-test-driver-c9jh8: {kubelet kind-worker2} Pulled: Container image 
"registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:04 +0000 UTC - event for dra-test-driver-c9jh8: {kubelet kind-worker2} Started: Started container registrar Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:04 +0000 UTC - event for dra-test-driver-twq8l: {kubelet kind-worker} Started: Started container registrar Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:04 +0000 UTC - event for dra-test-driver-twq8l: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:04 +0000 UTC - event for dra-test-driver-twq8l: {kubelet kind-worker} Created: Created container plugin Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:04 +0000 UTC - event for dra-test-driver-twq8l: {kubelet kind-worker} Started: Started container plugin Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:07 +0000 UTC - event for tester-1: {resource_claim } FailedResourceClaimCreation: PodResourceClaim my-inline-claim: resource claim template "tester-1": resourceclaimtemplate.resource.k8s.io "tester-1" not found Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:07 +0000 UTC - event for tester-1: {default-scheduler } FailedScheduling: 0/3 nodes are available: waiting for dynamic resource controller to create the resourceclaim "tester-1-my-inline-claim". no new claims to deallocate, preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod.. 
Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:08 +0000 UTC - event for tester-1: {default-scheduler } FailedScheduling: running Reserve plugin "DynamicResources": waiting for resource driver to allocate resource
Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:11 +0000 UTC - event for tester-1: {default-scheduler } Scheduled: Successfully assigned dra-2710/tester-1 to kind-worker2
Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:12 +0000 UTC - event for tester-1: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine
Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:12 +0000 UTC - event for tester-1: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine
Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:12 +0000 UTC - event for tester-1: {kubelet kind-worker2} Created: Created container with-resource
Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:12 +0000 UTC - event for tester-1: {kubelet kind-worker2} Started: Started container with-resource
Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:12 +0000 UTC - event for tester-1: {kubelet kind-worker2} Created: Created container with-resource-1
Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:13 +0000 UTC - event for tester-1: {kubelet kind-worker2} Started: Started container with-resource-1
Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:13 +0000 UTC - event for tester-1: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine
Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:13 +0000 UTC - event for tester-1: {kubelet kind-worker2} Created: Created container with-resource-1-2
Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:13 +0000 UTC - event for tester-1: {kubelet kind-worker2} Started: Started container with-resource-1-2
Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:16 +0000 UTC - event for tester-1: {kubelet kind-worker2} Killing: Stopping container with-resource
Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:16 +0000 UTC - event for tester-1: {kubelet kind-worker2} Killing: Stopping container with-resource-1
Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:16 +0000 UTC - event for tester-1: {kubelet kind-worker2} Killing: Stopping container with-resource-1-2
Mar 14 14:01:19.323: INFO: At 2023-03-14 14:00:18 +0000 UTC - event for tester-1-my-inline-claim: {resource driver dra-2710.k8s.io } Failed: remove allocation: ResourceClaim.resource.k8s.io "tester-1-my-inline-claim" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete
Mar 14 14:01:19.326: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 14 14:01:19.326: INFO: dra-test-driver-c9jh8 kind-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:00:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:00:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:00:04 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:00:03 +0000 UTC }]
Mar 14 14:01:19.326: INFO: dra-test-driver-twq8l kind-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:00:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:00:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:00:04 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:00:03 +0000 UTC }]
Mar 14 14:01:19.326: INFO:
Mar 14 14:01:19.388: INFO: Logging node info for node kind-control-plane
Mar 14 14:01:19.392: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7b0c8f1f-7d2e-4b5f-ab52-0e2399b9f764 438 0 2023-03-14 13:57:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:]
map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-14 13:57:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:57:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:58:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e8e6b089f1f44ab8ef4a2bc879ddd73,SystemUUID:ee43f17b-1489-4ea4-bec5-b7916f4f1fb0,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de 
registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:01:19.392: INFO: Logging kubelet events for node kind-control-plane Mar 14 14:01:19.399: INFO: Logging pods the kubelet thinks is on node kind-control-plane Mar 14 14:01:19.412: INFO: kube-proxy-fm2jh started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:19.412: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:01:19.412: INFO: coredns-ffc665895-mnldc started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:19.412: INFO: Container coredns ready: true, restart count 0 Mar 14 14:01:19.412: INFO: local-path-provisioner-687869657c-v9k2k started at 2023-03-14 13:58:09 +0000 
UTC (0+1 container statuses recorded) Mar 14 14:01:19.412: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 14 14:01:19.412: INFO: kindnet-nx87k started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:19.412: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:01:19.412: INFO: coredns-ffc665895-vmqts started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:19.412: INFO: Container coredns ready: true, restart count 0 Mar 14 14:01:19.412: INFO: kube-controller-manager-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:19.412: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 14 14:01:19.412: INFO: kube-scheduler-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:19.412: INFO: Container kube-scheduler ready: true, restart count 0 Mar 14 14:01:19.412: INFO: etcd-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:19.412: INFO: Container etcd ready: true, restart count 0 Mar 14 14:01:19.412: INFO: kube-apiserver-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:19.412: INFO: Container kube-apiserver ready: true, restart count 0 Mar 14 14:01:19.469: INFO: Latency metrics for node kind-control-plane Mar 14 14:01:19.469: INFO: Logging node info for node kind-worker Mar 14 14:01:19.473: INFO: Node Info: &Node{ObjectMeta:{kind-worker 9cca062e-b3b4-4ef2-9c10-412063b4ece4 1368 0 2023-03-14 13:58:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] 
[] [] [{kubeadm Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-14 13:58:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2023-03-14 13:59:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5a3b3841831c42fc96e5cb187f537f04,SystemUUID:ed67c939-37e3-47de-ab06-0144304a5aa1,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 
registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:01:19.474: INFO: Logging kubelet events for node kind-worker Mar 14 14:01:19.486: INFO: Logging pods the kubelet thinks is on node kind-worker Mar 14 14:01:19.501: INFO: dra-test-driver-8jmwc started at 2023-03-14 14:00:01 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:19.501: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:19.501: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:19.501: INFO: kindnet-fzdn9 started at 
2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:19.501: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:01:19.501: INFO: kube-proxy-l4q98 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:19.501: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:01:19.501: INFO: dra-test-driver-6zxqg started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:19.501: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:19.501: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:19.501: INFO: dra-test-driver-zrgsx started at 2023-03-14 14:01:15 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:19.501: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:19.501: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:19.501: INFO: dra-test-driver-twq8l started at 2023-03-14 14:00:03 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:19.501: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:19.501: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:19.501: INFO: dra-test-driver-bvfg8 started at 2023-03-14 14:00:02 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:19.501: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:19.501: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:19.501: INFO: dra-test-driver-b7dnq started at 2023-03-14 14:01:09 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:19.501: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:19.501: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:19.501: INFO: dra-test-driver-xrvr8 started at 2023-03-14 14:01:15 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:19.501: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:19.501: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:19.501: 
INFO: dra-test-driver-t74z8 started at 2023-03-14 14:00:07 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:19.501: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:19.501: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:19.697: INFO: Latency metrics for node kind-worker Mar 14 14:01:19.697: INFO: Logging node info for node kind-worker2 Mar 14 14:01:19.706: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 49a194e2-5e70-437e-aa3c-3a490ff23c54 1358 0 2023-03-14 13:58:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:58:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:59:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:48810b9a669b47cea51d5fa0f821cf84,SystemUUID:603f9452-86ad-460a-83be-e3f10d4a362c,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 
registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:01:19.706: INFO: Logging kubelet events for node kind-worker2 Mar 14 14:01:19.721: INFO: Logging pods the kubelet thinks is on node kind-worker2 Mar 14 14:01:19.737: INFO: kindnet-5qdz7 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:19.737: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:01:19.737: INFO: dra-test-driver-v6g2p started at 2023-03-14 14:00:07 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:19.737: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:19.737: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:19.737: INFO: dra-test-driver-ss4k7 started at 2023-03-14 14:00:01 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:19.737: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:19.737: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:19.737: INFO: kube-proxy-vnlx8 started at 2023-03-14 13:58:11 +0000 UTC (0+1 
container statuses recorded) Mar 14 14:01:19.737: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:01:19.737: INFO: dra-test-driver-c9jh8 started at 2023-03-14 14:00:03 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:19.737: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:19.737: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:19.737: INFO: dra-test-driver-nvf7d started at 2023-03-14 14:01:15 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:19.737: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:19.737: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:19.737: INFO: dra-test-driver-w277j started at 2023-03-14 14:00:02 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:19.737: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:19.737: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:19.737: INFO: dra-test-driver-vvn4m started at 2023-03-14 14:01:09 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:19.737: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:19.737: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:19.737: INFO: tester-1 started at 2023-03-14 14:01:17 +0000 UTC (1+1 container statuses recorded) Mar 14 14:01:19.737: INFO: Init container with-resource-init ready: true, restart count 0 Mar 14 14:01:19.737: INFO: Container with-resource ready: false, restart count 0 Mar 14 14:01:19.841: INFO: Latency metrics for node kind-worker2 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:01:19.841 (524ms) < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:01:19.841 (524ms) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:01:19.841 STEP: Destroying namespace "dra-2710" 
for this suite. - test/e2e/framework/framework.go:351 @ 03/14/23 14:01:19.841 < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:01:19.848 (7ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:01:19.849 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:01:19.849 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sDRA\s\[Feature\:DynamicResourceAllocation\]\scluster\swith\sdelayed\sallocation\ssupports\ssimple\spod\sreferencing\sexternal\sresource\sclaim$'
[FAILED] Timed out after 60.001s. claims in the namespaces Expected <[]v1alpha2.ResourceClaim | len:1, cap:1>:
- metadata:
    creationTimestamp: "2023-03-14T13:58:51Z"
    deletionGracePeriodSeconds: 0
    deletionTimestamp: "2023-03-14T13:58:59Z"
    finalizers:
    - dra-9589.k8s.io/deletion-protection
    managedFields:
    - apiVersion: resource.k8s.io/v1alpha2
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:finalizers:
            .: {}
            v:"dra-9589.k8s.io/deletion-protection": {}
        f:spec:
          f:allocationMode: {}
          f:parametersRef:
            .: {}
            f:kind: {}
            f:name: {}
          f:resourceClassName: {}
      manager: e2e.test
      operation: Update
      time: "2023-03-14T13:58:51Z"
    - apiVersion: resource.k8s.io/v1alpha2
      fieldsType: FieldsV1
      fieldsV1:
        f:status:
          f:allocation:
            .: {}
            f:availableOnNodes: {}
            f:context: {}
            f:shareable: {}
          f:driverName: {}
      manager: e2e.test
      operation: Update
      subresource: status
      time: "2023-03-14T13:58:51Z"
    name: external-claim
    namespace: dra-9589
    resourceVersion: "1224"
    uid: f592fa33-5b07-444f-baad-6a92a388bc0a
  spec:
    allocationMode: WaitForFirstConsumer
    parametersRef:
      kind: ConfigMap
      name: parameters-1
    resourceClassName: dra-9589-class
  status:
    allocation:
      availableOnNodes:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - kind-worker
            - kind-worker2
      context:
      - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}'
      shareable: true
    driverName: dra-9589.k8s.io
to be empty
In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 13:59:59.794
from junit_01.xml
> Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 13:58:45.388 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:211 @ 03/14/23 13:58:45.388 Mar 14 13:58:45.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dra - test/e2e/framework/framework.go:250 @ 03/14/23 13:58:45.389 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 03/14/23 13:58:45.506 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:262 @ 03/14/23 13:58:45.51 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 13:58:45.531 (143ms) > Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 13:58:45.531 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 13:58:45.531 (0s) > Enter [BeforeEach] cluster - test/e2e/dra/deploy.go:62 @ 03/14/23 13:58:45.531 STEP: selecting nodes - test/e2e/dra/deploy.go:63 @ 03/14/23 13:58:45.531 Mar 14 13:58:45.546: INFO: testing on nodes [kind-worker kind-worker2] < Exit [BeforeEach] cluster - test/e2e/dra/deploy.go:62 @ 03/14/23 13:58:45.546 (15ms) > Enter [BeforeEach] cluster - test/e2e/dra/deploy.go:95 @ 03/14/23 13:58:45.546 STEP: deploying driver on nodes [kind-worker kind-worker2] - test/e2e/dra/deploy.go:130 @ 03/14/23 13:58:45.546 Mar 14 13:58:45.547: INFO: creating *v1.ReplicaSet: dra-9589/dra-test-driver I0314 13:58:45.548469 66264 controller.go:295] "resource controller: Starting" driver="dra-9589.k8s.io" I0314 13:58:49.610176 66264 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker2" pod="dra-9589/dra-test-driver-gjrmj" I0314 13:58:49.610209 66264 nonblockinggrpcserver.go:107] 
"kubelet plugin/registrar: GRPC server started" node="kind-worker2" pod="dra-9589/dra-test-driver-gjrmj"
I0314 13:58:49.611321 66264 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker" pod="dra-9589/dra-test-driver-xqsdd"
I0314 13:58:49.611343 66264 nonblockinggrpcserver.go:107] "kubelet plugin/registrar: GRPC server started" node="kind-worker" pod="dra-9589/dra-test-driver-xqsdd"
STEP: wait for plugin registration - test/e2e/dra/deploy.go:242 @ 03/14/23 13:58:49.611
I0314 13:58:49.840345 66264 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-9589/dra-test-driver-gjrmj" requestID=1 request="&InfoRequest{}"
I0314 13:58:49.840392 66264 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-9589/dra-test-driver-gjrmj" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-9589.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-9589.k8s.io.sock,SupportedVersions:[1.0.0],}"
I0314 13:58:49.853833 66264 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-9589/dra-test-driver-gjrmj" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}"
I0314 13:58:49.853865 66264 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-9589/dra-test-driver-gjrmj" requestID=2 response="&RegistrationStatusResponse{}"
I0314 13:58:49.865527 66264 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-9589/dra-test-driver-xqsdd" requestID=1 request="&InfoRequest{}"
I0314 13:58:49.865567 66264 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-9589/dra-test-driver-xqsdd" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-9589.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-9589.k8s.io.sock,SupportedVersions:[1.0.0],}"
I0314 13:58:49.893601 66264 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-9589/dra-test-driver-xqsdd" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}"
I0314 13:58:49.893633 66264 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-9589/dra-test-driver-xqsdd" requestID=2 response="&RegistrationStatusResponse{}"
< Exit [BeforeEach] cluster - test/e2e/dra/deploy.go:95 @ 03/14/23 13:58:51.611 (6.066s)
> Enter [BeforeEach] cluster - test/e2e/dra/dra.go:752 @ 03/14/23 13:58:51.611
STEP: creating *v1alpha2.ResourceClass dra-9589-class - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.611
END STEP: creating *v1alpha2.ResourceClass dra-9589-class - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.632 (20ms)
< Exit [BeforeEach] cluster - test/e2e/dra/dra.go:752 @ 03/14/23 13:58:51.632 (21ms)
> Enter [It] supports simple pod referencing external resource claim - test/e2e/dra/dra.go:188 @ 03/14/23 13:58:51.632
STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.632
END STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.649 (17ms)
STEP: creating *v1alpha2.ResourceClaim external-claim - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.65
END STEP: creating *v1alpha2.ResourceClaim external-claim - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.663 (14ms)
STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.663
END STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.686 (22ms)
I0314 13:58:53.300040 66264 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker2" pod="dra-9589/dra-test-driver-gjrmj" requestID=1
request="&NodePrepareResourceRequest{Namespace:dra-9589,ClaimUid:f592fa33-5b07-444f-baad-6a92a388bc0a,ClaimName:external-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" STEP: creating CDI file /cdi/dra-9589.k8s.io-f592fa33-5b07-444f-baad-6a92a388bc0a.json on node kind-worker2: {"cdiVersion":"0.3.0","kind":"dra-9589.k8s.io/test","devices":[{"name":"claim-f592fa33-5b07-444f-baad-6a92a388bc0a","containerEdits":{"env":["user_a=b"]}}]} - test/e2e/dra/deploy.go:217 @ 03/14/23 13:58:53.3 Mar 14 13:58:53.300: INFO: >>> kubeConfig: /root/.kube/config Mar 14 13:58:53.301: INFO: ExecWithOptions: Clientset creation Mar 14 13:58:53.301: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-9589/pods/dra-test-driver-gjrmj/exec?command=sh&command=-c&command=base64+-d+%3E%27%2Fcdi%2Fdra-9589.k8s.io-f592fa33-5b07-444f-baad-6a92a388bc0a.json.tmp%27+%3C%3CEOF%0AeyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTk1ODkuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tZjU5MmZhMzMtNWIwNy00NDRmLWJhYWQtNmE5MmEzODhiYzBhIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19%0AEOF&container=plugin&container=plugin&stderr=true&stdout=true) I0314 13:58:53.492050 66264 io.go:119] "Command completed" command=< [sh -c base64 -d >'/cdi/dra-9589.k8s.io-f592fa33-5b07-444f-baad-6a92a388bc0a.json.tmp' <<EOF eyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTk1ODkuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tZjU5MmZhMzMtNWIwNy00NDRmLWJhYWQtNmE5MmEzODhiYzBhIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19 EOF] > stdout="" stderr="" err=<nil> Mar 14 13:58:53.492: INFO: >>> kubeConfig: /root/.kube/config Mar 14 13:58:53.493: INFO: ExecWithOptions: Clientset creation Mar 14 13:58:53.493: INFO: ExecWithOptions: execute(POST 
https://127.0.0.1:34309/api/v1/namespaces/dra-9589/pods/dra-test-driver-gjrmj/exec?command=mv&command=%2Fcdi%2Fdra-9589.k8s.io-f592fa33-5b07-444f-baad-6a92a388bc0a.json.tmp&command=%2Fcdi%2Fdra-9589.k8s.io-f592fa33-5b07-444f-baad-6a92a388bc0a.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 13:58:53.684879 66264 io.go:119] "Command completed" command=[mv /cdi/dra-9589.k8s.io-f592fa33-5b07-444f-baad-6a92a388bc0a.json.tmp /cdi/dra-9589.k8s.io-f592fa33-5b07-444f-baad-6a92a388bc0a.json] stdout="" stderr="" err=<nil> I0314 13:58:53.684929 66264 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker2" pod="dra-9589/dra-test-driver-gjrmj" requestID=1 response="&NodePrepareResourceResponse{CdiDevices:[dra-9589.k8s.io/test=claim-f592fa33-5b07-444f-baad-6a92a388bc0a],}" < Exit [It] supports simple pod referencing external resource claim - test/e2e/dra/dra.go:188 @ 03/14/23 13:58:55.742 (4.11s) > Enter [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 13:58:55.742 Mar 14 13:58:55.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 13:58:55.746 (4ms) > Enter [DeferCleanup (Each)] cluster - test/e2e/dra/dra.go:762 @ 03/14/23 13:58:55.746 STEP: delete pods and claims - test/e2e/dra/dra.go:773 @ 03/14/23 13:58:55.753 STEP: deleting *v1.Pod dra-9589/tester-1 - test/e2e/dra/dra.go:780 @ 03/14/23 13:58:55.759 I0314 13:58:58.998231 66264 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker2" pod="dra-9589/dra-test-driver-gjrmj" requestID=2 request="&NodeUnprepareResourceRequest{Namespace:dra-9589,ClaimUid:f592fa33-5b07-444f-baad-6a92a388bc0a,ClaimName:external-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" STEP: deleting CDI file 
/cdi/dra-9589.k8s.io-f592fa33-5b07-444f-baad-6a92a388bc0a.json on node kind-worker2 - test/e2e/dra/deploy.go:221 @ 03/14/23 13:58:58.998
Mar 14 13:58:58.998: INFO: >>> kubeConfig: /root/.kube/config
Mar 14 13:58:58.999: INFO: ExecWithOptions: Clientset creation
Mar 14 13:58:58.999: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-9589/pods/dra-test-driver-gjrmj/exec?command=rm&command=-rf&command=%2Fcdi%2Fdra-9589.k8s.io-f592fa33-5b07-444f-baad-6a92a388bc0a.json&container=plugin&container=plugin&stderr=true&stdout=true)
I0314 13:58:59.205792 66264 io.go:119] "Command completed" command=[rm -rf /cdi/dra-9589.k8s.io-f592fa33-5b07-444f-baad-6a92a388bc0a.json] stdout="" stderr="" err=<nil>
I0314 13:58:59.205845 66264 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker2" pod="dra-9589/dra-test-driver-gjrmj" requestID=2 response="&NodeUnprepareResourceResponse{}"
STEP: deleting *v1alpha2.ResourceClaim dra-9589/external-claim - test/e2e/dra/dra.go:796 @ 03/14/23 13:58:59.784
STEP: waiting for resources on kind-worker2 to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 13:58:59.792
STEP: waiting for resources on kind-worker to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 13:58:59.792
STEP: waiting for claims to be deallocated and deleted - test/e2e/dra/dra.go:808 @ 03/14/23 13:58:59.792
E0314 13:58:59.798902 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-9589/external-claim"
E0314 13:58:59.810048 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-9589/external-claim"
E0314 13:58:59.830398 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-9589/external-claim"
E0314 13:58:59.858586 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-9589/external-claim"
E0314 13:58:59.909574 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-9589/external-claim"
E0314 13:58:59.997710 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-9589/external-claim"
E0314 13:59:00.167832 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-9589/external-claim"
E0314 13:59:00.501755 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-9589/external-claim"
E0314 13:59:01.149286 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-9589/external-claim"
E0314 13:59:02.441002 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-9589/external-claim"
E0314 13:59:05.042373 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-9589/external-claim"
E0314 13:59:10.168371 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-9589/external-claim"
E0314 13:59:20.413701 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-9589/external-claim"
E0314 13:59:40.898702 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-9589/external-claim"
[FAILED] Timed out after 60.001s.
claims in the namespaces
Expected
    <[]v1alpha2.ResourceClaim | len:1, cap:1>:
    - metadata:
        creationTimestamp: "2023-03-14T13:58:51Z"
        deletionGracePeriodSeconds: 0
        deletionTimestamp: "2023-03-14T13:58:59Z"
        finalizers:
        - dra-9589.k8s.io/deletion-protection
        managedFields:
        - apiVersion: resource.k8s.io/v1alpha2
          fieldsType: FieldsV1
          fieldsV1:
            f:metadata:
              f:finalizers:
                .: {}
                v:"dra-9589.k8s.io/deletion-protection": {}
            f:spec:
              f:allocationMode: {}
              f:parametersRef:
                .: {}
                f:kind: {}
                f:name: {}
              f:resourceClassName: {}
          manager: e2e.test
          operation: Update
          time: "2023-03-14T13:58:51Z"
        - apiVersion: resource.k8s.io/v1alpha2
          fieldsType: FieldsV1
          fieldsV1:
            f:status:
              f:allocation:
                .: {}
                f:availableOnNodes: {}
                f:context: {}
                f:shareable: {}
              f:driverName: {}
          manager: e2e.test
          operation: Update
          subresource: status
          time: "2023-03-14T13:58:51Z"
        name: external-claim
        namespace: dra-9589
        resourceVersion: "1224"
        uid: f592fa33-5b07-444f-baad-6a92a388bc0a
      spec:
        allocationMode: WaitForFirstConsumer
        parametersRef:
          kind: ConfigMap
          name: parameters-1
        resourceClassName: dra-9589-class
      status:
        allocation:
          availableOnNodes:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - kind-worker
                - kind-worker2
          context:
          - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}'
          shareable: true
        driverName: dra-9589.k8s.io
to be empty
In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 13:59:59.794
< Exit [DeferCleanup (Each)] cluster - test/e2e/dra/dra.go:762 @ 03/14/23 13:59:59.794 (1m4.048s)
> Enter [DeferCleanup (Each)] cluster - test/e2e/dra/deploy.go:103 @ 03/14/23 13:59:59.794
I0314 13:59:59.794982 66264 controller.go:310] "resource controller: Shutting down" driver="dra-9589.k8s.io"
E0314 13:59:59.795987 66264 nonblockinggrpcserver.go:101] "kubelet plugin/registrar: GRPC server failed" err="listening was stopped" node="kind-worker" pod="dra-9589/dra-test-driver-xqsdd"
E0314 13:59:59.796002 66264 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker" pod="dra-9589/dra-test-driver-xqsdd"
E0314 13:59:59.796209 66264 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker2" pod="dra-9589/dra-test-driver-gjrmj"
< Exit [DeferCleanup (Each)] cluster - test/e2e/dra/deploy.go:103 @ 03/14/23 13:59:59.796 (2ms)
> Enter [DeferCleanup (Each)] cluster - deleting *v1.ReplicaSet: dra-9589/dra-test-driver | create.go:156 @ 03/14/23 13:59:59.796
< Exit [DeferCleanup (Each)] cluster - deleting *v1.ReplicaSet: dra-9589/dra-test-driver | create.go:156 @ 03/14/23 13:59:59.826 (30ms)
> Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 13:59:59.826
< Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 13:59:59.826 (0s)
> Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 13:59:59.826
STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 13:59:59.826
STEP: Collecting events from namespace "dra-9589". - test/e2e/framework/debug/dump.go:42 @ 03/14/23 13:59:59.826
STEP: Found 25 events.
- test/e2e/framework/debug/dump.go:46 @ 03/14/23 13:59:59.833
Mar 14 13:59:59.833: INFO: At 2023-03-14 13:58:45 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-xqsdd
Mar 14 13:59:59.833: INFO: At 2023-03-14 13:58:45 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-gjrmj
Mar 14 13:59:59.833: INFO: At 2023-03-14 13:58:45 +0000 UTC - event for dra-test-driver-gjrmj: {default-scheduler } Scheduled: Successfully assigned dra-9589/dra-test-driver-gjrmj to kind-worker2
Mar 14 13:59:59.833: INFO: At 2023-03-14 13:58:45 +0000 UTC - event for dra-test-driver-xqsdd: {default-scheduler } Scheduled: Successfully assigned dra-9589/dra-test-driver-xqsdd to kind-worker
Mar 14 13:59:59.833: INFO: At 2023-03-14 13:58:46 +0000 UTC - event for dra-test-driver-gjrmj: {kubelet kind-worker2} Pulling: Pulling image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3"
Mar 14 13:59:59.833: INFO: At 2023-03-14 13:58:46 +0000 UTC - event for dra-test-driver-xqsdd: {kubelet kind-worker} Pulling: Pulling image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3"
Mar 14 13:59:59.833: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-gjrmj: {kubelet kind-worker2} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" in 272.506376ms (1.482615343s including waiting)
Mar 14 13:59:59.833: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-gjrmj: {kubelet kind-worker2} Created: Created container registrar
Mar 14 13:59:59.833: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-xqsdd: {kubelet kind-worker} Started: Started container registrar
Mar 14 13:59:59.833: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-xqsdd: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine
Mar 14 13:59:59.833: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-xqsdd: {kubelet kind-worker} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" in 154.998362ms (1.152676774s including waiting)
Mar 14 13:59:59.833: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-xqsdd: {kubelet kind-worker} Created: Created container registrar
Mar 14 13:59:59.833: INFO: At 2023-03-14 13:58:48 +0000 UTC - event for dra-test-driver-gjrmj: {kubelet kind-worker2} Created: Created container plugin
Mar 14 13:59:59.833: INFO: At 2023-03-14 13:58:48 +0000 UTC - event for dra-test-driver-gjrmj: {kubelet kind-worker2} Started: Started container plugin
Mar 14 13:59:59.833: INFO: At 2023-03-14 13:58:48 +0000 UTC - event for dra-test-driver-gjrmj: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine
Mar 14 13:59:59.833: INFO: At 2023-03-14 13:58:48 +0000 UTC - event for dra-test-driver-gjrmj: {kubelet kind-worker2} Started: Started container registrar
Mar 14 13:59:59.833: INFO: At 2023-03-14 13:58:48 +0000 UTC - event for dra-test-driver-xqsdd: {kubelet kind-worker} Created: Created container plugin
Mar 14 13:59:59.833: INFO: At 2023-03-14 13:58:48 +0000 UTC - event for dra-test-driver-xqsdd: {kubelet kind-worker} Started: Started container plugin
Mar 14 13:59:59.833: INFO: At 2023-03-14 13:58:51 +0000 UTC - event for tester-1: {default-scheduler } FailedScheduling: running Reserve plugin "DynamicResources": waiting for resource driver to allocate resource
Mar 14 13:59:59.833: INFO: At 2023-03-14 13:58:52 +0000 UTC - event for tester-1: {default-scheduler } Scheduled: Successfully assigned dra-9589/tester-1 to kind-worker2
Mar 14 13:59:59.833: INFO: At 2023-03-14 13:58:54 +0000 UTC - event for tester-1: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine
Mar 14 13:59:59.833: INFO: At 2023-03-14 13:58:54 +0000 UTC -
event for tester-1: {kubelet kind-worker2} Created: Created container with-resource Mar 14 13:59:59.833: INFO: At 2023-03-14 13:58:54 +0000 UTC - event for tester-1: {kubelet kind-worker2} Started: Started container with-resource Mar 14 13:59:59.833: INFO: At 2023-03-14 13:58:57 +0000 UTC - event for tester-1: {kubelet kind-worker2} Killing: Stopping container with-resource Mar 14 13:59:59.833: INFO: At 2023-03-14 13:58:59 +0000 UTC - event for external-claim: {resource driver dra-9589.k8s.io } Failed: remove allocation: ResourceClaim.resource.k8s.io "external-claim" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete Mar 14 13:59:59.846: INFO: POD NODE PHASE GRACE CONDITIONS Mar 14 13:59:59.846: INFO: dra-test-driver-gjrmj kind-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:45 +0000 UTC }] Mar 14 13:59:59.846: INFO: dra-test-driver-xqsdd kind-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:45 +0000 UTC }] Mar 14 13:59:59.846: INFO: Mar 14 13:59:59.996: INFO: Logging node info for node kind-control-plane Mar 14 14:00:00.007: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7b0c8f1f-7d2e-4b5f-ab52-0e2399b9f764 438 0 2023-03-14 13:57:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: 
node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-14 13:57:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:57:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 
0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:58:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e8e6b089f1f44ab8ef4a2bc879ddd73,SystemUUID:ee43f17b-1489-4ea4-bec5-b7916f4f1fb0,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 
LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:00:00.007: INFO: Logging kubelet events for node kind-control-plane Mar 14 14:00:00.025: INFO: Logging pods the kubelet thinks is on node kind-control-plane Mar 14 14:00:00.047: INFO: kube-proxy-fm2jh started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:00.047: 
INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:00:00.047: INFO: coredns-ffc665895-mnldc started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:00.047: INFO: Container coredns ready: true, restart count 0 Mar 14 14:00:00.047: INFO: local-path-provisioner-687869657c-v9k2k started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:00.047: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 14 14:00:00.047: INFO: kube-controller-manager-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:00.047: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 14 14:00:00.047: INFO: kube-scheduler-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:00.047: INFO: Container kube-scheduler ready: true, restart count 0 Mar 14 14:00:00.047: INFO: etcd-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:00.047: INFO: Container etcd ready: true, restart count 0 Mar 14 14:00:00.047: INFO: kube-apiserver-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:00.047: INFO: Container kube-apiserver ready: true, restart count 0 Mar 14 14:00:00.047: INFO: kindnet-nx87k started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:00.047: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:00:00.047: INFO: coredns-ffc665895-vmqts started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:00.047: INFO: Container coredns ready: true, restart count 0 Mar 14 14:00:00.197: INFO: Latency metrics for node kind-control-plane Mar 14 14:00:00.197: INFO: Logging node info for node kind-worker Mar 14 14:00:00.213: INFO: Node Info: &Node{ObjectMeta:{kind-worker 9cca062e-b3b4-4ef2-9c10-412063b4ece4 1368 0 
2023-03-14 13:58:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-14 13:58:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2023-03-14 13:59:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5a3b3841831c42fc96e5cb187f537f04,SystemUUID:ed67c939-37e3-47de-ab06-0144304a5aa1,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de 
registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:00:00.214: INFO: Logging kubelet events for node kind-worker Mar 14 14:00:00.231: INFO: Logging pods the kubelet thinks is on node kind-worker Mar 14 14:00:00.251: INFO: dra-test-driver-psxfr started at 2023-03-14 13:58:45 +0000 UTC (0+2 container 
statuses recorded) Mar 14 14:00:00.252: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.252: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.252: INFO: dra-test-driver-4t66h started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.252: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.252: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.252: INFO: dra-test-driver-xqsdd started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.252: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.252: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.252: INFO: dra-test-driver-86zdr started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.252: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.252: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.252: INFO: dra-test-driver-5j9fw started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.252: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.252: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.252: INFO: tester-1 started at 2023-03-14 13:58:55 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:00.252: INFO: Container with-resource ready: false, restart count 0 Mar 14 14:00:00.252: INFO: dra-test-driver-7wgnx started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.252: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.252: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.252: INFO: kindnet-fzdn9 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:00.252: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:00:00.252: INFO: kube-proxy-l4q98 started at 2023-03-14 13:58:12 +0000 
UTC (0+1 container statuses recorded) Mar 14 14:00:00.252: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:00:00.252: INFO: dra-test-driver-6zxqg started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.252: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.252: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.252: INFO: dra-test-driver-other-xtlpg started at 2023-03-14 13:58:51 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.252: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.252: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.411: INFO: Latency metrics for node kind-worker Mar 14 14:00:00.411: INFO: Logging node info for node kind-worker2 Mar 14 14:00:00.416: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 49a194e2-5e70-437e-aa3c-3a490ff23c54 1358 0 2023-03-14 13:58:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:58:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } 
{kubelet Update v1 2023-03-14 13:59:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:13 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:48810b9a669b47cea51d5fa0f821cf84,SystemUUID:603f9452-86ad-460a-83be-e3f10d4a362c,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 
registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:00:00.416: INFO: Logging kubelet events for node kind-worker2 Mar 14 14:00:00.422: INFO: Logging pods the kubelet thinks is on node kind-worker2 Mar 14 14:00:00.451: INFO: kube-proxy-vnlx8 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:00.451: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:00:00.451: INFO: dra-test-driver-7qxsw started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.451: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.451: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.451: INFO: dra-test-driver-lgb7f started at 2023-03-14 13:58:46 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.451: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.451: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.451: INFO: dra-test-driver-fb6kg started at 2023-03-14 13:58:45 +0000 UTC (0+2 
container statuses recorded) Mar 14 14:00:00.451: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.451: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.451: INFO: dra-test-driver-other-mp779 started at 2023-03-14 13:58:51 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.451: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.451: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.451: INFO: dra-test-driver-n5z8m started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.451: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.451: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.451: INFO: dra-test-driver-f8m4d started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.451: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.451: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.451: INFO: kindnet-5qdz7 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:00.451: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:00:00.451: INFO: dra-test-driver-4x8n7 started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.451: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.451: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.451: INFO: dra-test-driver-gjrmj started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.451: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.451: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.653: INFO: Latency metrics for node kind-worker2 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:00:00.653 (827ms) < Exit [DeferCleanup (Each)] [sig-node] DRA 
[Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:00:00.653 (827ms) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:00:00.653 STEP: Destroying namespace "dra-9589" for this suite. - test/e2e/framework/framework.go:351 @ 03/14/23 14:00:00.654 < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:00:00.672 (19ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:00:00.672 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:00:00.672 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sDRA\s\[Feature\:DynamicResourceAllocation\]\scluster\swith\sdelayed\sallocation\ssupports\ssimple\spod\sreferencing\sinline\sresource\sclaim$'
[FAILED] Timed out after 60.000s. claims in the namespaces Expected <[]v1alpha2.ResourceClaim | len:1, cap:1>: - metadata: creationTimestamp: "2023-03-14T14:01:33Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T14:01:43Z" finalizers: - dra-2813.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:ownerReferences: .: {} k:{"uid":"21bf76f8-8108-4a30-9235-aec92cb8952b"}: {} f:spec: f:allocationMode: {} f:parametersRef: .: {} f:kind: {} f:name: {} f:resourceClassName: {} manager: kube-controller-manager operation: Update time: "2023-03-14T14:01:33Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"dra-2813.k8s.io/deletion-protection": {} manager: e2e.test operation: Update time: "2023-03-14T14:01:34Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: f:allocation: .: {} f:availableOnNodes: {} f:context: {} f:shareable: {} f:driverName: {} manager: e2e.test operation: Update subresource: status time: "2023-03-14T14:01:34Z" name: tester-1-my-inline-claim namespace: dra-2813 ownerReferences: - apiVersion: v1 blockOwnerDeletion: true controller: true kind: Pod name: tester-1 uid: 21bf76f8-8108-4a30-9235-aec92cb8952b resourceVersion: "3526" uid: f5f2fe42-04fe-4f7d-a067-3711676dd852 spec: allocationMode: WaitForFirstConsumer parametersRef: kind: ConfigMap name: parameters-1 resourceClassName: dra-2813-class status: allocation: availableOnNodes: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kind-worker - kind-worker2 context: - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}' shareable: true driverName: dra-2813.k8s.io to be empty In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:02:45.302from junit_01.xml
> Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 14:01:29.136 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:211 @ 03/14/23 14:01:29.136 Mar 14 14:01:29.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dra - test/e2e/framework/framework.go:250 @ 03/14/23 14:01:29.137 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 03/14/23 14:01:29.152 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:262 @ 03/14/23 14:01:29.156 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 14:01:29.161 (25ms) > Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 14:01:29.161 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 14:01:29.161 (0s) > Enter [BeforeEach] cluster - test/e2e/dra/deploy.go:62 @ 03/14/23 14:01:29.161 STEP: selecting nodes - test/e2e/dra/deploy.go:63 @ 03/14/23 14:01:29.161 Mar 14 14:01:29.166: INFO: testing on nodes [kind-worker kind-worker2] < Exit [BeforeEach] cluster - test/e2e/dra/deploy.go:62 @ 03/14/23 14:01:29.166 (5ms) > Enter [BeforeEach] cluster - test/e2e/dra/deploy.go:95 @ 03/14/23 14:01:29.166 STEP: deploying driver on nodes [kind-worker kind-worker2] - test/e2e/dra/deploy.go:130 @ 03/14/23 14:01:29.166 I0314 14:01:29.167157 66269 controller.go:295] "resource controller: Starting" driver="dra-2813.k8s.io" Mar 14 14:01:29.168: INFO: creating *v1.ReplicaSet: dra-2813/dra-test-driver I0314 14:01:31.202412 66269 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker2" pod="dra-2813/dra-test-driver-9f5vg" I0314 14:01:31.202435 66269 nonblockinggrpcserver.go:107] 
"kubelet plugin/registrar: GRPC server started" node="kind-worker2" pod="dra-2813/dra-test-driver-9f5vg" I0314 14:01:31.203901 66269 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker" pod="dra-2813/dra-test-driver-zg2wf" I0314 14:01:31.203935 66269 nonblockinggrpcserver.go:107] "kubelet plugin/registrar: GRPC server started" node="kind-worker" pod="dra-2813/dra-test-driver-zg2wf" STEP: wait for plugin registration - test/e2e/dra/deploy.go:242 @ 03/14/23 14:01:31.203 I0314 14:01:31.406357 66269 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-2813/dra-test-driver-9f5vg" requestID=1 request="&InfoRequest{}" I0314 14:01:31.406420 66269 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-2813/dra-test-driver-9f5vg" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-2813.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-2813.k8s.io.sock,SupportedVersions:[1.0.0],}" I0314 14:01:31.407182 66269 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-2813/dra-test-driver-zg2wf" requestID=1 request="&InfoRequest{}" I0314 14:01:31.407225 66269 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-2813/dra-test-driver-zg2wf" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-2813.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-2813.k8s.io.sock,SupportedVersions:[1.0.0],}" I0314 14:01:31.415293 66269 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-2813/dra-test-driver-zg2wf" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}" I0314 14:01:31.415324 66269 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-2813/dra-test-driver-zg2wf" requestID=2 
response="&RegistrationStatusResponse{}" I0314 14:01:31.415508 66269 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-2813/dra-test-driver-9f5vg" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}" I0314 14:01:31.415535 66269 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-2813/dra-test-driver-9f5vg" requestID=2 response="&RegistrationStatusResponse{}" < Exit [BeforeEach] cluster - test/e2e/dra/deploy.go:95 @ 03/14/23 14:01:33.204 (4.038s) > Enter [BeforeEach] cluster - test/e2e/dra/dra.go:752 @ 03/14/23 14:01:33.204 STEP: creating *v1alpha2.ResourceClass dra-2813-class - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:33.204 END STEP: creating *v1alpha2.ResourceClass dra-2813-class - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:33.21 (6ms) < Exit [BeforeEach] cluster - test/e2e/dra/dra.go:752 @ 03/14/23 14:01:33.21 (6ms) > Enter [It] supports simple pod referencing inline resource claim - test/e2e/dra/dra.go:172 @ 03/14/23 14:01:33.21 STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:33.21 END STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:33.217 (7ms) STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:33.217 END STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:33.223 (6ms) STEP: creating *v1alpha2.ResourceClaimTemplate tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:33.223 END STEP: creating *v1alpha2.ResourceClaimTemplate tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:33.232 (9ms) I0314 14:01:38.289715 66269 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker2" pod="dra-2813/dra-test-driver-9f5vg" requestID=1 
request="&NodePrepareResourceRequest{Namespace:dra-2813,ClaimUid:f5f2fe42-04fe-4f7d-a067-3711676dd852,ClaimName:tester-1-my-inline-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" STEP: creating CDI file /cdi/dra-2813.k8s.io-f5f2fe42-04fe-4f7d-a067-3711676dd852.json on node kind-worker2: {"cdiVersion":"0.3.0","kind":"dra-2813.k8s.io/test","devices":[{"name":"claim-f5f2fe42-04fe-4f7d-a067-3711676dd852","containerEdits":{"env":["user_a=b"]}}]} - test/e2e/dra/deploy.go:217 @ 03/14/23 14:01:38.289 Mar 14 14:01:38.289: INFO: >>> kubeConfig: /root/.kube/config Mar 14 14:01:38.290: INFO: ExecWithOptions: Clientset creation Mar 14 14:01:38.290: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-2813/pods/dra-test-driver-9f5vg/exec?command=sh&command=-c&command=base64+-d+%3E%27%2Fcdi%2Fdra-2813.k8s.io-f5f2fe42-04fe-4f7d-a067-3711676dd852.json.tmp%27+%3C%3CEOF%0AeyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTI4MTMuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tZjVmMmZlNDItMDRmZS00ZjdkLWEwNjctMzcxMTY3NmRkODUyIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19%0AEOF&container=plugin&container=plugin&stderr=true&stdout=true) I0314 14:01:38.396864 66269 io.go:119] "Command completed" command=< [sh -c base64 -d >'/cdi/dra-2813.k8s.io-f5f2fe42-04fe-4f7d-a067-3711676dd852.json.tmp' <<EOF eyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTI4MTMuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tZjVmMmZlNDItMDRmZS00ZjdkLWEwNjctMzcxMTY3NmRkODUyIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19 EOF] > stdout="" stderr="" err=<nil> Mar 14 14:01:38.396: INFO: >>> kubeConfig: /root/.kube/config Mar 14 14:01:38.397: INFO: ExecWithOptions: Clientset creation Mar 14 14:01:38.397: INFO: ExecWithOptions: execute(POST 
https://127.0.0.1:34309/api/v1/namespaces/dra-2813/pods/dra-test-driver-9f5vg/exec?command=mv&command=%2Fcdi%2Fdra-2813.k8s.io-f5f2fe42-04fe-4f7d-a067-3711676dd852.json.tmp&command=%2Fcdi%2Fdra-2813.k8s.io-f5f2fe42-04fe-4f7d-a067-3711676dd852.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 14:01:38.486861 66269 io.go:119] "Command completed" command=[mv /cdi/dra-2813.k8s.io-f5f2fe42-04fe-4f7d-a067-3711676dd852.json.tmp /cdi/dra-2813.k8s.io-f5f2fe42-04fe-4f7d-a067-3711676dd852.json] stdout="" stderr="" err=<nil> I0314 14:01:38.486908 66269 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker2" pod="dra-2813/dra-test-driver-9f5vg" requestID=1 response="&NodePrepareResourceResponse{CdiDevices:[dra-2813.k8s.io/test=claim-f5f2fe42-04fe-4f7d-a067-3711676dd852],}" < Exit [It] supports simple pod referencing inline resource claim - test/e2e/dra/dra.go:172 @ 03/14/23 14:01:41.26 (8.05s) > Enter [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 14:01:41.26 Mar 14 14:01:41.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 14:01:41.263 (3ms) > Enter [DeferCleanup (Each)] cluster - test/e2e/dra/dra.go:762 @ 03/14/23 14:01:41.263 STEP: delete pods and claims - test/e2e/dra/dra.go:773 @ 03/14/23 14:01:41.268 STEP: deleting *v1.Pod dra-2813/tester-1 - test/e2e/dra/dra.go:780 @ 03/14/23 14:01:41.272 I0314 14:01:42.514156 66269 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker2" pod="dra-2813/dra-test-driver-9f5vg" requestID=2 request="&NodeUnprepareResourceRequest{Namespace:dra-2813,ClaimUid:f5f2fe42-04fe-4f7d-a067-3711676dd852,ClaimName:tester-1-my-inline-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" STEP: deleting CDI file 
/cdi/dra-2813.k8s.io-f5f2fe42-04fe-4f7d-a067-3711676dd852.json on node kind-worker2 - test/e2e/dra/deploy.go:221 @ 03/14/23 14:01:42.514 Mar 14 14:01:42.514: INFO: >>> kubeConfig: /root/.kube/config Mar 14 14:01:42.515: INFO: ExecWithOptions: Clientset creation Mar 14 14:01:42.515: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-2813/pods/dra-test-driver-9f5vg/exec?command=rm&command=-rf&command=%2Fcdi%2Fdra-2813.k8s.io-f5f2fe42-04fe-4f7d-a067-3711676dd852.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 14:01:42.626714 66269 io.go:119] "Command completed" command=[rm -rf /cdi/dra-2813.k8s.io-f5f2fe42-04fe-4f7d-a067-3711676dd852.json] stdout="" stderr="" err=<nil> I0314 14:01:42.626777 66269 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker2" pod="dra-2813/dra-test-driver-9f5vg" requestID=2 response="&NodeUnprepareResourceResponse{}" E0314 14:01:43.635015 66269 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2813/tester-1-my-inline-claim" E0314 14:01:43.645319 66269 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2813/tester-1-my-inline-claim" E0314 14:01:43.669087 66269 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2813/tester-1-my-inline-claim" E0314 14:01:43.700091 66269 controller.go:345] "resource controller: processing failed" err="remove 
allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2813/tester-1-my-inline-claim" E0314 14:01:43.748574 66269 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2813/tester-1-my-inline-claim" E0314 14:01:43.834415 66269 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2813/tester-1-my-inline-claim" E0314 14:01:44.002234 66269 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2813/tester-1-my-inline-claim" E0314 14:01:44.327916 66269 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2813/tester-1-my-inline-claim" E0314 14:01:44.972919 66269 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2813/tester-1-my-inline-claim" STEP: waiting for resources on kind-worker2 to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 14:01:45.3 STEP: waiting for resources on kind-worker to be unprepared - test/e2e/dra/dra.go:804 @ 
03/14/23 14:01:45.3 STEP: waiting for claims to be deallocated and deleted - test/e2e/dra/dra.go:808 @ 03/14/23 14:01:45.3 E0314 14:01:46.258149 66269 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2813/tester-1-my-inline-claim" E0314 14:01:48.823499 66269 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2813/tester-1-my-inline-claim" E0314 14:01:53.949706 66269 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2813/tester-1-my-inline-claim" E0314 14:02:04.195878 66269 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2813/tester-1-my-inline-claim" E0314 14:02:24.681677 66269 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2813/tester-1-my-inline-claim" [FAILED] Timed out after 60.000s. 
claims in the namespaces Expected <[]v1alpha2.ResourceClaim | len:1, cap:1>: - metadata: creationTimestamp: "2023-03-14T14:01:33Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T14:01:43Z" finalizers: - dra-2813.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:ownerReferences: .: {} k:{"uid":"21bf76f8-8108-4a30-9235-aec92cb8952b"}: {} f:spec: f:allocationMode: {} f:parametersRef: .: {} f:kind: {} f:name: {} f:resourceClassName: {} manager: kube-controller-manager operation: Update time: "2023-03-14T14:01:33Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"dra-2813.k8s.io/deletion-protection": {} manager: e2e.test operation: Update time: "2023-03-14T14:01:34Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: f:allocation: .: {} f:availableOnNodes: {} f:context: {} f:shareable: {} f:driverName: {} manager: e2e.test operation: Update subresource: status time: "2023-03-14T14:01:34Z" name: tester-1-my-inline-claim namespace: dra-2813 ownerReferences: - apiVersion: v1 blockOwnerDeletion: true controller: true kind: Pod name: tester-1 uid: 21bf76f8-8108-4a30-9235-aec92cb8952b resourceVersion: "3526" uid: f5f2fe42-04fe-4f7d-a067-3711676dd852 spec: allocationMode: WaitForFirstConsumer parametersRef: kind: ConfigMap name: parameters-1 resourceClassName: dra-2813-class status: allocation: availableOnNodes: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kind-worker - kind-worker2 context: - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}' shareable: true driverName: dra-2813.k8s.io to be empty In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:02:45.302 < Exit [DeferCleanup (Each)] cluster - test/e2e/dra/dra.go:762 @ 03/14/23 14:02:45.302 (1m4.039s) > Enter [DeferCleanup (Each)] cluster - test/e2e/dra/deploy.go:103 @ 03/14/23 14:02:45.302 
I0314 14:02:45.302560 66269 controller.go:310] "resource controller: Shutting down" driver="dra-2813.k8s.io"
E0314 14:02:45.303348 66269 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker2" pod="dra-2813/dra-test-driver-9f5vg"
E0314 14:02:45.303811 66269 nonblockinggrpcserver.go:101] "kubelet plugin/registrar: GRPC server failed" err="listening was stopped" node="kind-worker" pod="dra-2813/dra-test-driver-zg2wf"
E0314 14:02:45.303819 66269 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker" pod="dra-2813/dra-test-driver-zg2wf"
< Exit [DeferCleanup (Each)] cluster - test/e2e/dra/deploy.go:103 @ 03/14/23 14:02:45.303 (2ms)
> Enter [DeferCleanup (Each)] cluster - deleting *v1.ReplicaSet: dra-2813/dra-test-driver | create.go:156 @ 03/14/23 14:02:45.303
< Exit [DeferCleanup (Each)] cluster - deleting *v1.ReplicaSet: dra-2813/dra-test-driver | create.go:156 @ 03/14/23 14:02:45.316 (13ms)
> Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 14:02:45.316
< Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 14:02:45.316 (0s)
> Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:02:45.316
STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:02:45.317
STEP: Collecting events from namespace "dra-2813". - test/e2e/framework/debug/dump.go:42 @ 03/14/23 14:02:45.317
STEP: Found 25 events.
- test/e2e/framework/debug/dump.go:46 @ 03/14/23 14:02:45.32 Mar 14 14:02:45.320: INFO: At 2023-03-14 14:01:29 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-9f5vg Mar 14 14:02:45.320: INFO: At 2023-03-14 14:01:29 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-zg2wf Mar 14 14:02:45.320: INFO: At 2023-03-14 14:01:29 +0000 UTC - event for dra-test-driver-9f5vg: {default-scheduler } Scheduled: Successfully assigned dra-2813/dra-test-driver-9f5vg to kind-worker2 Mar 14 14:02:45.320: INFO: At 2023-03-14 14:01:29 +0000 UTC - event for dra-test-driver-9f5vg: {kubelet kind-worker2} Started: Started container registrar Mar 14 14:02:45.320: INFO: At 2023-03-14 14:01:29 +0000 UTC - event for dra-test-driver-9f5vg: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:02:45.320: INFO: At 2023-03-14 14:01:29 +0000 UTC - event for dra-test-driver-9f5vg: {kubelet kind-worker2} Created: Created container plugin Mar 14 14:02:45.320: INFO: At 2023-03-14 14:01:29 +0000 UTC - event for dra-test-driver-9f5vg: {kubelet kind-worker2} Created: Created container registrar Mar 14 14:02:45.320: INFO: At 2023-03-14 14:01:29 +0000 UTC - event for dra-test-driver-9f5vg: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:02:45.320: INFO: At 2023-03-14 14:01:29 +0000 UTC - event for dra-test-driver-zg2wf: {default-scheduler } Scheduled: Successfully assigned dra-2813/dra-test-driver-zg2wf to kind-worker Mar 14 14:02:45.320: INFO: At 2023-03-14 14:01:29 +0000 UTC - event for dra-test-driver-zg2wf: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:02:45.320: INFO: At 2023-03-14 14:01:29 +0000 UTC - event for 
dra-test-driver-zg2wf: {kubelet kind-worker} Created: Created container registrar Mar 14 14:02:45.320: INFO: At 2023-03-14 14:01:29 +0000 UTC - event for dra-test-driver-zg2wf: {kubelet kind-worker} Started: Started container registrar Mar 14 14:02:45.320: INFO: At 2023-03-14 14:01:29 +0000 UTC - event for dra-test-driver-zg2wf: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:02:45.320: INFO: At 2023-03-14 14:01:29 +0000 UTC - event for dra-test-driver-zg2wf: {kubelet kind-worker} Created: Created container plugin Mar 14 14:02:45.320: INFO: At 2023-03-14 14:01:30 +0000 UTC - event for dra-test-driver-9f5vg: {kubelet kind-worker2} Started: Started container plugin Mar 14 14:02:45.320: INFO: At 2023-03-14 14:01:30 +0000 UTC - event for dra-test-driver-zg2wf: {kubelet kind-worker} Started: Started container plugin Mar 14 14:02:45.320: INFO: At 2023-03-14 14:01:33 +0000 UTC - event for tester-1: {resource_claim } FailedResourceClaimCreation: PodResourceClaim my-inline-claim: resource claim template "tester-1": resourceclaimtemplate.resource.k8s.io "tester-1" not found Mar 14 14:02:45.320: INFO: At 2023-03-14 14:01:33 +0000 UTC - event for tester-1: {default-scheduler } FailedScheduling: 0/3 nodes are available: waiting for dynamic resource controller to create the resourceclaim "tester-1-my-inline-claim". no new claims to deallocate, preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod.. 
Mar 14 14:02:45.320: INFO: At 2023-03-14 14:01:34 +0000 UTC - event for tester-1: {default-scheduler } FailedScheduling: running Reserve plugin "DynamicResources": waiting for resource driver to allocate resource Mar 14 14:02:45.320: INFO: At 2023-03-14 14:01:37 +0000 UTC - event for tester-1: {default-scheduler } Scheduled: Successfully assigned dra-2813/tester-1 to kind-worker2 Mar 14 14:02:45.320: INFO: At 2023-03-14 14:01:38 +0000 UTC - event for tester-1: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 14 14:02:45.320: INFO: At 2023-03-14 14:01:38 +0000 UTC - event for tester-1: {kubelet kind-worker2} Created: Created container with-resource Mar 14 14:02:45.320: INFO: At 2023-03-14 14:01:38 +0000 UTC - event for tester-1: {kubelet kind-worker2} Started: Started container with-resource Mar 14 14:02:45.320: INFO: At 2023-03-14 14:01:41 +0000 UTC - event for tester-1: {kubelet kind-worker2} Killing: Stopping container with-resource Mar 14 14:02:45.320: INFO: At 2023-03-14 14:01:43 +0000 UTC - event for tester-1-my-inline-claim: {resource driver dra-2813.k8s.io } Failed: remove allocation: ResourceClaim.resource.k8s.io "tester-1-my-inline-claim" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete Mar 14 14:02:45.325: INFO: POD NODE PHASE GRACE CONDITIONS Mar 14 14:02:45.325: INFO: dra-test-driver-9f5vg kind-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:29 +0000 UTC }] Mar 14 14:02:45.325: INFO: dra-test-driver-zg2wf kind-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 
14:01:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:29 +0000 UTC }] Mar 14 14:02:45.325: INFO: Mar 14 14:02:45.374: INFO: Logging node info for node kind-control-plane Mar 14 14:02:45.377: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7b0c8f1f-7d2e-4b5f-ab52-0e2399b9f764 438 0 2023-03-14 13:57:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-14 13:57:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:57:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:58:09 +0000 
UTC,LastTransitionTime:2023-03-14 13:58:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e8e6b089f1f44ab8ef4a2bc879ddd73,SystemUUID:ee43f17b-1489-4ea4-bec5-b7916f4f1fb0,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 
registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:02:45.377: INFO: Logging kubelet events for node kind-control-plane Mar 14 14:02:45.382: INFO: Logging pods the kubelet thinks is on node kind-control-plane Mar 14 14:02:45.392: INFO: kube-proxy-fm2jh started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:45.392: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:02:45.392: INFO: coredns-ffc665895-mnldc started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:45.392: INFO: Container coredns ready: true, restart count 0 Mar 14 14:02:45.392: INFO: local-path-provisioner-687869657c-v9k2k started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:45.392: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 14 14:02:45.392: INFO: kube-controller-manager-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:45.392: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 14 14:02:45.392: INFO: kube-scheduler-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:45.392: INFO: Container kube-scheduler ready: true, restart count 0 Mar 14 14:02:45.392: INFO: etcd-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:45.392: INFO: 
Container etcd ready: true, restart count 0 Mar 14 14:02:45.392: INFO: kube-apiserver-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:45.392: INFO: Container kube-apiserver ready: true, restart count 0 Mar 14 14:02:45.392: INFO: kindnet-nx87k started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:45.392: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:02:45.392: INFO: coredns-ffc665895-vmqts started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:45.392: INFO: Container coredns ready: true, restart count 0 Mar 14 14:02:45.459: INFO: Latency metrics for node kind-control-plane Mar 14 14:02:45.459: INFO: Logging node info for node kind-worker Mar 14 14:02:45.463: INFO: Node Info: &Node{ObjectMeta:{kind-worker 9cca062e-b3b4-4ef2-9c10-412063b4ece4 1368 0 2023-03-14 13:58:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-14 13:58:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet 
Update v1 2023-03-14 13:59:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:26 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5a3b3841831c42fc96e5cb187f537f04,SystemUUID:ed67c939-37e3-47de-ab06-0144304a5aa1,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 
registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:02:45.464: INFO: Logging kubelet events for node kind-worker Mar 14 14:02:45.468: INFO: Logging pods the kubelet thinks is on node kind-worker Mar 14 14:02:45.474: INFO: kube-proxy-l4q98 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:45.474: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:02:45.474: INFO: dra-test-driver-zg2wf started at 2023-03-14 14:01:29 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:45.474: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:45.474: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:45.474: INFO: kindnet-fzdn9 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:45.474: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:02:45.521: INFO: Latency metrics for node kind-worker Mar 14 14:02:45.521: INFO: Logging node info for node kind-worker2 Mar 14 14:02:45.524: INFO: Node Info: 
&Node{ObjectMeta:{kind-worker2 49a194e2-5e70-437e-aa3c-3a490ff23c54 1358 0 2023-03-14 13:58:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:58:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:59:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} 
BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:48810b9a669b47cea51d5fa0f821cf84,SystemUUID:603f9452-86ad-460a-83be-e3f10d4a362c,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 
LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:02:45.525: INFO: Logging kubelet events for node kind-worker2 Mar 14 14:02:45.529: INFO: Logging pods the kubelet thinks is on node kind-worker2 Mar 14 14:02:45.539: INFO: kindnet-5qdz7 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:45.539: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:02:45.539: INFO: kube-proxy-vnlx8 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:45.539: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:02:45.539: INFO: dra-test-driver-9f5vg started at 2023-03-14 14:01:29 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:45.539: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:45.539: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:45.587: INFO: Latency metrics for node kind-worker2 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:02:45.587 (271ms) < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:02:45.587 (271ms) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:02:45.587 STEP: Destroying namespace "dra-2813" for this suite. - test/e2e/framework/framework.go:351 @ 03/14/23 14:02:45.587 < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:02:45.594 (7ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:02:45.594 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:02:45.594 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sDRA\s\[Feature\:DynamicResourceAllocation\]\scluster\swith\simmediate\sallocation\ssupports\sexternal\sclaim\sreferenced\sby\smultiple\scontainers\sof\smultiple\spods$'
[FAILED] Timed out after 60.000s. claims in the namespaces Expected <[]v1alpha2.ResourceClaim | len:1, cap:1>: - metadata: creationTimestamp: "2023-03-14T13:58:51Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T13:59:01Z" finalizers: - dra-2539.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"dra-2539.k8s.io/deletion-protection": {} f:spec: f:allocationMode: {} f:parametersRef: .: {} f:kind: {} f:name: {} f:resourceClassName: {} manager: e2e.test operation: Update time: "2023-03-14T13:58:51Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: f:allocation: .: {} f:availableOnNodes: {} f:context: {} f:shareable: {} f:driverName: {} manager: e2e.test operation: Update subresource: status time: "2023-03-14T13:58:51Z" name: external-claim namespace: dra-2539 resourceVersion: "1275" uid: 680592de-9fc2-44b5-aae5-32e07aef824b spec: allocationMode: Immediate parametersRef: kind: ConfigMap name: parameters-1 resourceClassName: dra-2539-class status: allocation: availableOnNodes: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kind-worker - kind-worker2 context: - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}' shareable: true driverName: dra-2539.k8s.io to be empty In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:00:01.714from junit_01.xml
> Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 13:58:45.175 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:211 @ 03/14/23 13:58:45.176 Mar 14 13:58:45.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dra - test/e2e/framework/framework.go:250 @ 03/14/23 13:58:45.177 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 03/14/23 13:58:45.198 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:262 @ 03/14/23 13:58:45.205 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 13:58:45.21 (35ms) > Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 13:58:45.21 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 13:58:45.21 (0s) > Enter [BeforeEach] cluster - test/e2e/dra/deploy.go:62 @ 03/14/23 13:58:45.21 STEP: selecting nodes - test/e2e/dra/deploy.go:63 @ 03/14/23 13:58:45.21 Mar 14 13:58:45.216: INFO: testing on nodes [kind-worker kind-worker2] < Exit [BeforeEach] cluster - test/e2e/dra/deploy.go:62 @ 03/14/23 13:58:45.216 (6ms) > Enter [BeforeEach] cluster - test/e2e/dra/deploy.go:95 @ 03/14/23 13:58:45.216 STEP: deploying driver on nodes [kind-worker kind-worker2] - test/e2e/dra/deploy.go:130 @ 03/14/23 13:58:45.216 I0314 13:58:45.216936 66265 controller.go:295] "resource controller: Starting" driver="dra-2539.k8s.io" Mar 14 13:58:45.220: INFO: creating *v1.ReplicaSet: dra-2539/dra-test-driver I0314 13:58:49.254886 66265 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker" pod="dra-2539/dra-test-driver-5j9fw" I0314 13:58:49.254925 66265 nonblockinggrpcserver.go:107] 
"kubelet plugin/registrar: GRPC server started" node="kind-worker" pod="dra-2539/dra-test-driver-5j9fw" I0314 13:58:49.256283 66265 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker2" pod="dra-2539/dra-test-driver-7qxsw" I0314 13:58:49.256310 66265 nonblockinggrpcserver.go:107] "kubelet plugin/registrar: GRPC server started" node="kind-worker2" pod="dra-2539/dra-test-driver-7qxsw" STEP: wait for plugin registration - test/e2e/dra/deploy.go:242 @ 03/14/23 13:58:49.256 I0314 13:58:49.458713 66265 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-2539/dra-test-driver-5j9fw" requestID=1 request="&InfoRequest{}" I0314 13:58:49.458768 66265 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-2539/dra-test-driver-5j9fw" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-2539.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-2539.k8s.io.sock,SupportedVersions:[1.0.0],}" I0314 13:58:49.459336 66265 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-2539/dra-test-driver-7qxsw" requestID=1 request="&InfoRequest{}" I0314 13:58:49.459377 66265 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-2539/dra-test-driver-7qxsw" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-2539.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-2539.k8s.io.sock,SupportedVersions:[1.0.0],}" I0314 13:58:49.471655 66265 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-2539/dra-test-driver-7qxsw" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}" I0314 13:58:49.471696 66265 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-2539/dra-test-driver-7qxsw" requestID=2 
response="&RegistrationStatusResponse{}" I0314 13:58:49.471729 66265 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-2539/dra-test-driver-5j9fw" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}" I0314 13:58:49.471754 66265 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-2539/dra-test-driver-5j9fw" requestID=2 response="&RegistrationStatusResponse{}" < Exit [BeforeEach] cluster - test/e2e/dra/deploy.go:95 @ 03/14/23 13:58:51.256 (6.04s) > Enter [BeforeEach] cluster - test/e2e/dra/dra.go:752 @ 03/14/23 13:58:51.256 STEP: creating *v1alpha2.ResourceClass dra-2539-class - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.256 END STEP: creating *v1alpha2.ResourceClass dra-2539-class - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.286 (30ms) < Exit [BeforeEach] cluster - test/e2e/dra/dra.go:752 @ 03/14/23 13:58:51.286 (30ms) > Enter [It] supports external claim referenced by multiple containers of multiple pods - test/e2e/dra/dra.go:209 @ 03/14/23 13:58:51.286 STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.286 END STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.295 (9ms) STEP: creating *v1alpha2.ResourceClaim external-claim - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.295 END STEP: creating *v1alpha2.ResourceClaim external-claim - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.324 (29ms) STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.324 END STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.366 (42ms) STEP: creating *v1.Pod tester-2 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.366 END STEP: creating *v1.Pod tester-2 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.414 (47ms) STEP: creating *v1.Pod tester-3 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.414 END STEP: creating *v1.Pod tester-3 - 
test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.439 (25ms) I0314 13:58:51.917096 66265 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker" pod="dra-2539/dra-test-driver-5j9fw" requestID=1 request="&NodePrepareResourceRequest{Namespace:dra-2539,ClaimUid:680592de-9fc2-44b5-aae5-32e07aef824b,ClaimName:external-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" STEP: creating CDI file /cdi/dra-2539.k8s.io-680592de-9fc2-44b5-aae5-32e07aef824b.json on node kind-worker: {"cdiVersion":"0.3.0","kind":"dra-2539.k8s.io/test","devices":[{"name":"claim-680592de-9fc2-44b5-aae5-32e07aef824b","containerEdits":{"env":["user_a=b"]}}]} - test/e2e/dra/deploy.go:217 @ 03/14/23 13:58:51.917 Mar 14 13:58:51.917: INFO: >>> kubeConfig: /root/.kube/config Mar 14 13:58:51.918: INFO: ExecWithOptions: Clientset creation Mar 14 13:58:51.918: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-2539/pods/dra-test-driver-5j9fw/exec?command=sh&command=-c&command=base64+-d+%3E%27%2Fcdi%2Fdra-2539.k8s.io-680592de-9fc2-44b5-aae5-32e07aef824b.json.tmp%27+%3C%3CEOF%0AeyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTI1MzkuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tNjgwNTkyZGUtOWZjMi00NGI1LWFhZTUtMzJlMDdhZWY4MjRiIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19%0AEOF&container=plugin&container=plugin&stderr=true&stdout=true) I0314 13:58:52.007104 66265 io.go:119] "Command completed" command=< [sh -c base64 -d >'/cdi/dra-2539.k8s.io-680592de-9fc2-44b5-aae5-32e07aef824b.json.tmp' <<EOF eyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTI1MzkuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tNjgwNTkyZGUtOWZjMi00NGI1LWFhZTUtMzJlMDdhZWY4MjRiIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19 EOF] > stdout="" stderr="" err=<nil> Mar 14 13:58:52.007: INFO: >>> kubeConfig: /root/.kube/config Mar 14 13:58:52.008: INFO: ExecWithOptions: Clientset creation Mar 14 13:58:52.008: INFO: ExecWithOptions: execute(POST 
https://127.0.0.1:34309/api/v1/namespaces/dra-2539/pods/dra-test-driver-5j9fw/exec?command=mv&command=%2Fcdi%2Fdra-2539.k8s.io-680592de-9fc2-44b5-aae5-32e07aef824b.json.tmp&command=%2Fcdi%2Fdra-2539.k8s.io-680592de-9fc2-44b5-aae5-32e07aef824b.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 13:58:52.116877 66265 io.go:119] "Command completed" command=[mv /cdi/dra-2539.k8s.io-680592de-9fc2-44b5-aae5-32e07aef824b.json.tmp /cdi/dra-2539.k8s.io-680592de-9fc2-44b5-aae5-32e07aef824b.json] stdout="" stderr="" err=<nil> I0314 13:58:52.116946 66265 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker" pod="dra-2539/dra-test-driver-5j9fw" requestID=1 response="&NodePrepareResourceResponse{CdiDevices:[dra-2539.k8s.io/test=claim-680592de-9fc2-44b5-aae5-32e07aef824b],}" I0314 13:58:53.205041 66265 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker2" pod="dra-2539/dra-test-driver-7qxsw" requestID=1 request="&NodePrepareResourceRequest{Namespace:dra-2539,ClaimUid:680592de-9fc2-44b5-aae5-32e07aef824b,ClaimName:external-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" STEP: creating CDI file /cdi/dra-2539.k8s.io-680592de-9fc2-44b5-aae5-32e07aef824b.json on node kind-worker2: {"cdiVersion":"0.3.0","kind":"dra-2539.k8s.io/test","devices":[{"name":"claim-680592de-9fc2-44b5-aae5-32e07aef824b","containerEdits":{"env":["user_a=b"]}}]} - test/e2e/dra/deploy.go:217 @ 03/14/23 13:58:53.205 Mar 14 13:58:53.205: INFO: >>> kubeConfig: /root/.kube/config Mar 14 13:58:53.206: INFO: ExecWithOptions: Clientset creation Mar 14 13:58:53.206: INFO: ExecWithOptions: execute(POST 
https://127.0.0.1:34309/api/v1/namespaces/dra-2539/pods/dra-test-driver-7qxsw/exec?command=sh&command=-c&command=base64+-d+%3E%27%2Fcdi%2Fdra-2539.k8s.io-680592de-9fc2-44b5-aae5-32e07aef824b.json.tmp%27+%3C%3CEOF%0AeyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTI1MzkuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tNjgwNTkyZGUtOWZjMi00NGI1LWFhZTUtMzJlMDdhZWY4MjRiIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19%0AEOF&container=plugin&container=plugin&stderr=true&stdout=true) I0314 13:58:53.359819 66265 io.go:119] "Command completed" command=< [sh -c base64 -d >'/cdi/dra-2539.k8s.io-680592de-9fc2-44b5-aae5-32e07aef824b.json.tmp' <<EOF eyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTI1MzkuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tNjgwNTkyZGUtOWZjMi00NGI1LWFhZTUtMzJlMDdhZWY4MjRiIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19 EOF] > stdout="" stderr="" err=<nil> Mar 14 13:58:53.359: INFO: >>> kubeConfig: /root/.kube/config Mar 14 13:58:53.360: INFO: ExecWithOptions: Clientset creation Mar 14 13:58:53.361: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-2539/pods/dra-test-driver-7qxsw/exec?command=mv&command=%2Fcdi%2Fdra-2539.k8s.io-680592de-9fc2-44b5-aae5-32e07aef824b.json.tmp&command=%2Fcdi%2Fdra-2539.k8s.io-680592de-9fc2-44b5-aae5-32e07aef824b.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 13:58:53.531897 66265 io.go:119] "Command completed" command=[mv /cdi/dra-2539.k8s.io-680592de-9fc2-44b5-aae5-32e07aef824b.json.tmp /cdi/dra-2539.k8s.io-680592de-9fc2-44b5-aae5-32e07aef824b.json] stdout="" stderr="" err=<nil> I0314 13:58:53.531952 66265 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker2" pod="dra-2539/dra-test-driver-7qxsw" requestID=1 response="&NodePrepareResourceResponse{CdiDevices:[dra-2539.k8s.io/test=claim-680592de-9fc2-44b5-aae5-32e07aef824b],}" < Exit [It] supports external claim referenced by multiple containers of multiple pods 
- test/e2e/dra/dra.go:209 @ 03/14/23 13:58:57.606 (6.319s) > Enter [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 13:58:57.606 Mar 14 13:58:57.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 13:58:57.609 (4ms) > Enter [DeferCleanup (Each)] cluster - test/e2e/dra/dra.go:762 @ 03/14/23 13:58:57.609 STEP: delete pods and claims - test/e2e/dra/dra.go:773 @ 03/14/23 13:58:57.616 STEP: deleting *v1.Pod dra-2539/tester-1 - test/e2e/dra/dra.go:780 @ 03/14/23 13:58:57.621 STEP: deleting *v1.Pod dra-2539/tester-2 - test/e2e/dra/dra.go:780 @ 03/14/23 13:58:57.63 STEP: deleting *v1.Pod dra-2539/tester-3 - test/e2e/dra/dra.go:780 @ 03/14/23 13:58:57.643 I0314 13:58:59.223252 66265 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker2" pod="dra-2539/dra-test-driver-7qxsw" requestID=2 request="&NodeUnprepareResourceRequest{Namespace:dra-2539,ClaimUid:680592de-9fc2-44b5-aae5-32e07aef824b,ClaimName:external-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" STEP: deleting CDI file /cdi/dra-2539.k8s.io-680592de-9fc2-44b5-aae5-32e07aef824b.json on node kind-worker2 - test/e2e/dra/deploy.go:221 @ 03/14/23 13:58:59.223 Mar 14 13:58:59.223: INFO: >>> kubeConfig: /root/.kube/config Mar 14 13:58:59.224: INFO: ExecWithOptions: Clientset creation Mar 14 13:58:59.224: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-2539/pods/dra-test-driver-7qxsw/exec?command=rm&command=-rf&command=%2Fcdi%2Fdra-2539.k8s.io-680592de-9fc2-44b5-aae5-32e07aef824b.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 13:58:59.307967 66265 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker" pod="dra-2539/dra-test-driver-5j9fw" requestID=2 
request="&NodeUnprepareResourceRequest{Namespace:dra-2539,ClaimUid:680592de-9fc2-44b5-aae5-32e07aef824b,ClaimName:external-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" I0314 13:58:59.381270 66265 io.go:119] "Command completed" command=[rm -rf /cdi/dra-2539.k8s.io-680592de-9fc2-44b5-aae5-32e07aef824b.json] stdout="" stderr="" err=<nil> I0314 13:58:59.381403 66265 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker2" pod="dra-2539/dra-test-driver-7qxsw" requestID=2 response="&NodeUnprepareResourceResponse{}" STEP: deleting CDI file /cdi/dra-2539.k8s.io-680592de-9fc2-44b5-aae5-32e07aef824b.json on node kind-worker - test/e2e/dra/deploy.go:221 @ 03/14/23 13:58:59.381 Mar 14 13:58:59.381: INFO: >>> kubeConfig: /root/.kube/config Mar 14 13:58:59.383: INFO: ExecWithOptions: Clientset creation Mar 14 13:58:59.383: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-2539/pods/dra-test-driver-5j9fw/exec?command=rm&command=-rf&command=%2Fcdi%2Fdra-2539.k8s.io-680592de-9fc2-44b5-aae5-32e07aef824b.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 13:58:59.510809 66265 io.go:119] "Command completed" command=[rm -rf /cdi/dra-2539.k8s.io-680592de-9fc2-44b5-aae5-32e07aef824b.json] stdout="" stderr="" err=<nil> I0314 13:58:59.510867 66265 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker" pod="dra-2539/dra-test-driver-5j9fw" requestID=2 response="&NodeUnprepareResourceResponse{}" STEP: deleting *v1alpha2.ResourceClaim dra-2539/external-claim - test/e2e/dra/dra.go:796 @ 03/14/23 13:59:01.697 STEP: waiting for resources on kind-worker to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 13:59:01.713 STEP: waiting for resources on kind-worker2 to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 13:59:01.713 STEP: waiting for claims to be deallocated and deleted - test/e2e/dra/dra.go:808 @ 03/14/23 13:59:01.713 
E0314 13:59:01.724186 66265 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2539/external-claim" E0314 13:59:01.746588 66265 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2539/external-claim" E0314 13:59:01.769085 66265 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2539/external-claim" E0314 13:59:01.809064 66265 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2539/external-claim" E0314 13:59:01.882306 66265 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2539/external-claim" E0314 13:59:01.984666 66265 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2539/external-claim" E0314 13:59:02.155917 66265 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only 
remove while marking a deallocation as complete" key="claim:dra-2539/external-claim" E0314 13:59:02.490438 66265 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2539/external-claim" E0314 13:59:03.164776 66265 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2539/external-claim" E0314 13:59:04.482182 66265 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2539/external-claim" E0314 13:59:07.051164 66265 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2539/external-claim" E0314 13:59:12.175884 66265 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2539/external-claim" E0314 13:59:22.420902 66265 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2539/external-claim" E0314 13:59:42.907607 66265 controller.go:345] "resource controller: processing failed" err="remove allocation: 
ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2539/external-claim" [FAILED] Timed out after 60.000s. claims in the namespaces Expected <[]v1alpha2.ResourceClaim | len:1, cap:1>: - metadata: creationTimestamp: "2023-03-14T13:58:51Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T13:59:01Z" finalizers: - dra-2539.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"dra-2539.k8s.io/deletion-protection": {} f:spec: f:allocationMode: {} f:parametersRef: .: {} f:kind: {} f:name: {} f:resourceClassName: {} manager: e2e.test operation: Update time: "2023-03-14T13:58:51Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: f:allocation: .: {} f:availableOnNodes: {} f:context: {} f:shareable: {} f:driverName: {} manager: e2e.test operation: Update subresource: status time: "2023-03-14T13:58:51Z" name: external-claim namespace: dra-2539 resourceVersion: "1275" uid: 680592de-9fc2-44b5-aae5-32e07aef824b spec: allocationMode: Immediate parametersRef: kind: ConfigMap name: parameters-1 resourceClassName: dra-2539-class status: allocation: availableOnNodes: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kind-worker - kind-worker2 context: - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}' shareable: true driverName: dra-2539.k8s.io to be empty In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:00:01.714 < Exit [DeferCleanup (Each)] cluster - test/e2e/dra/dra.go:762 @ 03/14/23 14:00:01.714 (1m4.105s) > Enter [DeferCleanup (Each)] cluster - test/e2e/dra/deploy.go:103 @ 03/14/23 14:00:01.714 I0314 14:00:01.715297 66265 controller.go:310] "resource controller: Shutting down" driver="dra-2539.k8s.io" E0314 14:00:01.716474 66265 nonblockinggrpcserver.go:101] "kubelet 
plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker" pod="dra-2539/dra-test-driver-5j9fw" E0314 14:00:01.717460 66265 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker2" pod="dra-2539/dra-test-driver-7qxsw" E0314 14:00:01.717511 66265 nonblockinggrpcserver.go:101] "kubelet plugin/registrar: GRPC server failed" err="listening was stopped" node="kind-worker2" pod="dra-2539/dra-test-driver-7qxsw" < Exit [DeferCleanup (Each)] cluster - test/e2e/dra/deploy.go:103 @ 03/14/23 14:00:01.718 (3ms) > Enter [DeferCleanup (Each)] cluster - deleting *v1.ReplicaSet: dra-2539/dra-test-driver | create.go:156 @ 03/14/23 14:00:01.718 < Exit [DeferCleanup (Each)] cluster - deleting *v1.ReplicaSet: dra-2539/dra-test-driver | create.go:156 @ 03/14/23 14:00:01.734 (16ms) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 14:00:01.734 < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 14:00:01.734 (0s) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:00:01.734 STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:00:01.734 STEP: Collecting events from namespace "dra-2539". - test/e2e/framework/debug/dump.go:42 @ 03/14/23 14:00:01.734 STEP: Found 61 events. 
- test/e2e/framework/debug/dump.go:46 @ 03/14/23 14:00:01.745 Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:45 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-5j9fw Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:45 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-7qxsw Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:45 +0000 UTC - event for dra-test-driver-5j9fw: {default-scheduler } Scheduled: Successfully assigned dra-2539/dra-test-driver-5j9fw to kind-worker Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:45 +0000 UTC - event for dra-test-driver-5j9fw: {kubelet kind-worker} Pulling: Pulling image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:45 +0000 UTC - event for dra-test-driver-7qxsw: {default-scheduler } Scheduled: Successfully assigned dra-2539/dra-test-driver-7qxsw to kind-worker2 Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:45 +0000 UTC - event for dra-test-driver-7qxsw: {kubelet kind-worker2} Pulling: Pulling image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-5j9fw: {kubelet kind-worker} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" in 1.154010023s (1.154139458s including waiting) Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-5j9fw: {kubelet kind-worker} Created: Created container registrar Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-5j9fw: {kubelet kind-worker} Started: Started container registrar Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-5j9fw: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:00:01.745: INFO: At 2023-03-14 
13:58:47 +0000 UTC - event for dra-test-driver-5j9fw: {kubelet kind-worker} Created: Created container plugin Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-5j9fw: {kubelet kind-worker} Started: Started container plugin Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-7qxsw: {kubelet kind-worker2} Started: Started container registrar Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-7qxsw: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-7qxsw: {kubelet kind-worker2} Created: Created container plugin Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-7qxsw: {kubelet kind-worker2} Started: Started container plugin Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-7qxsw: {kubelet kind-worker2} Created: Created container registrar Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-7qxsw: {kubelet kind-worker2} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" in 1.118578427s (1.118692428s including waiting) Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:51 +0000 UTC - event for tester-1: {default-scheduler } FailedScheduling: 0/3 nodes are available: unallocated immediate resourceclaim. no new claims to deallocate, preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod.. Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:51 +0000 UTC - event for tester-2: {default-scheduler } FailedScheduling: 0/3 nodes are available: unallocated immediate resourceclaim. no new claims to deallocate, preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod.. 
Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:51 +0000 UTC - event for tester-3: {default-scheduler } Scheduled: Successfully assigned dra-2539/tester-3 to kind-worker Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:52 +0000 UTC - event for tester-1: {default-scheduler } Scheduled: Successfully assigned dra-2539/tester-1 to kind-worker2 Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:52 +0000 UTC - event for tester-2: {default-scheduler } Scheduled: Successfully assigned dra-2539/tester-2 to kind-worker Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:52 +0000 UTC - event for tester-3: {kubelet kind-worker} Pulling: Pulling image "registry.k8s.io/e2e-test-images/busybox:1.29-4" Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:53 +0000 UTC - event for tester-1: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:53 +0000 UTC - event for tester-2: {kubelet kind-worker} Created: Created container with-resource Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:53 +0000 UTC - event for tester-2: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:53 +0000 UTC - event for tester-2: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:53 +0000 UTC - event for tester-2: {kubelet kind-worker} Started: Started container with-resource Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:53 +0000 UTC - event for tester-3: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:53 +0000 UTC - event for tester-3: {kubelet kind-worker} Started: Started container with-resource-1-2 Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:53 +0000 UTC 
- event for tester-3: {kubelet kind-worker} Created: Created container with-resource-1-2 Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:53 +0000 UTC - event for tester-3: {kubelet kind-worker} Created: Created container with-resource-1 Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:53 +0000 UTC - event for tester-3: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:53 +0000 UTC - event for tester-3: {kubelet kind-worker} Started: Started container with-resource-1 Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:53 +0000 UTC - event for tester-3: {kubelet kind-worker} Created: Created container with-resource Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:53 +0000 UTC - event for tester-3: {kubelet kind-worker} Started: Started container with-resource Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:53 +0000 UTC - event for tester-3: {kubelet kind-worker} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/busybox:1.29-4" in 646.636994ms (646.654799ms including waiting) Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:54 +0000 UTC - event for tester-1: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:54 +0000 UTC - event for tester-1: {kubelet kind-worker2} Started: Started container with-resource Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:54 +0000 UTC - event for tester-1: {kubelet kind-worker2} Created: Created container with-resource-1 Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:54 +0000 UTC - event for tester-1: {kubelet kind-worker2} Created: Created container with-resource Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:54 +0000 UTC - event for tester-1: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 14 14:00:01.745: INFO: At 
2023-03-14 13:58:54 +0000 UTC - event for tester-1: {kubelet kind-worker2} Started: Started container with-resource-1 Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:54 +0000 UTC - event for tester-2: {kubelet kind-worker} Started: Started container with-resource-1 Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:54 +0000 UTC - event for tester-2: {kubelet kind-worker} Started: Started container with-resource-1-2 Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:54 +0000 UTC - event for tester-2: {kubelet kind-worker} Created: Created container with-resource-1-2 Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:54 +0000 UTC - event for tester-2: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:54 +0000 UTC - event for tester-2: {kubelet kind-worker} Created: Created container with-resource-1 Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:55 +0000 UTC - event for tester-1: {kubelet kind-worker2} Created: Created container with-resource-1-2 Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:55 +0000 UTC - event for tester-1: {kubelet kind-worker2} Started: Started container with-resource-1-2 Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:57 +0000 UTC - event for tester-1: {kubelet kind-worker2} Killing: Stopping container with-resource Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:57 +0000 UTC - event for tester-1: {kubelet kind-worker2} Killing: Stopping container with-resource-1 Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:57 +0000 UTC - event for tester-1: {kubelet kind-worker2} Killing: Stopping container with-resource-1-2 Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:57 +0000 UTC - event for tester-2: {kubelet kind-worker} Killing: Stopping container with-resource-1-2 Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:57 +0000 UTC - event for tester-2: {kubelet kind-worker} Killing: Stopping container with-resource-1 Mar 14 14:00:01.745: INFO: At 2023-03-14 
13:58:57 +0000 UTC - event for tester-2: {kubelet kind-worker} Killing: Stopping container with-resource Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:57 +0000 UTC - event for tester-3: {kubelet kind-worker} Killing: Stopping container with-resource Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:57 +0000 UTC - event for tester-3: {kubelet kind-worker} Killing: Stopping container with-resource-1-2 Mar 14 14:00:01.745: INFO: At 2023-03-14 13:58:57 +0000 UTC - event for tester-3: {kubelet kind-worker} Killing: Stopping container with-resource-1 Mar 14 14:00:01.745: INFO: At 2023-03-14 13:59:01 +0000 UTC - event for external-claim: {resource driver dra-2539.k8s.io } Failed: remove allocation: ResourceClaim.resource.k8s.io "external-claim" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete Mar 14 14:00:01.755: INFO: POD NODE PHASE GRACE CONDITIONS Mar 14 14:00:01.755: INFO: dra-test-driver-5j9fw kind-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:45 +0000 UTC }] Mar 14 14:00:01.755: INFO: dra-test-driver-7qxsw kind-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:45 +0000 UTC }] Mar 14 14:00:01.755: INFO: Mar 14 14:00:01.851: INFO: Logging node info for node kind-control-plane Mar 14 14:00:01.855: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7b0c8f1f-7d2e-4b5f-ab52-0e2399b9f764 438 0 2023-03-14 13:57:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 
beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-14 13:57:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:57:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 
DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:58:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e8e6b089f1f44ab8ef4a2bc879ddd73,SystemUUID:ee43f17b-1489-4ea4-bec5-b7916f4f1fb0,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 
LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:00:01.856: INFO: Logging kubelet events for node kind-control-plane Mar 14 14:00:01.865: INFO: Logging pods the kubelet thinks is on node kind-control-plane Mar 14 14:00:01.878: INFO: kube-proxy-fm2jh started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:01.878: 
INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:00:01.878: INFO: coredns-ffc665895-mnldc started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:01.878: INFO: Container coredns ready: true, restart count 0 Mar 14 14:00:01.878: INFO: local-path-provisioner-687869657c-v9k2k started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:01.878: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 14 14:00:01.878: INFO: kindnet-nx87k started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:01.878: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:00:01.878: INFO: coredns-ffc665895-vmqts started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:01.878: INFO: Container coredns ready: true, restart count 0 Mar 14 14:00:01.878: INFO: kube-controller-manager-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:01.878: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 14 14:00:01.878: INFO: kube-scheduler-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:01.878: INFO: Container kube-scheduler ready: true, restart count 0 Mar 14 14:00:01.879: INFO: etcd-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:01.879: INFO: Container etcd ready: true, restart count 0 Mar 14 14:00:01.879: INFO: kube-apiserver-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:01.879: INFO: Container kube-apiserver ready: true, restart count 0 Mar 14 14:00:01.971: INFO: Latency metrics for node kind-control-plane Mar 14 14:00:01.971: INFO: Logging node info for node kind-worker Mar 14 14:00:01.976: INFO: Node Info: &Node{ObjectMeta:{kind-worker 9cca062e-b3b4-4ef2-9c10-412063b4ece4 1368 0 
2023-03-14 13:58:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-14 13:58:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2023-03-14 13:59:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5a3b3841831c42fc96e5cb187f537f04,SystemUUID:ed67c939-37e3-47de-ab06-0144304a5aa1,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de 
registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:00:01.976: INFO: Logging kubelet events for node kind-worker Mar 14 14:00:01.985: INFO: Logging pods the kubelet thinks is on node kind-worker Mar 14 14:00:02.000: INFO: dra-test-driver-r8q6p started at 2023-03-14 14:00:00 +0000 UTC (0+2 container 
statuses recorded) Mar 14 14:00:02.000: INFO: Container plugin ready: false, restart count 0 Mar 14 14:00:02.000: INFO: Container registrar ready: false, restart count 0 Mar 14 14:00:02.000: INFO: dra-test-driver-4t66h started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:02.000: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:02.000: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:02.000: INFO: dra-test-driver-86zdr started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:02.000: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:02.000: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:02.000: INFO: dra-test-driver-wfhjf started at 2023-03-14 14:00:00 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:02.000: INFO: Container plugin ready: false, restart count 0 Mar 14 14:00:02.000: INFO: Container registrar ready: false, restart count 0 Mar 14 14:00:02.000: INFO: dra-test-driver-5j9fw started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:02.000: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:02.000: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:02.000: INFO: tester-1 started at 2023-03-14 13:58:55 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:02.000: INFO: Container with-resource ready: false, restart count 0 Mar 14 14:00:02.000: INFO: dra-test-driver-t8wgt started at 2023-03-14 14:00:00 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:02.000: INFO: Container plugin ready: false, restart count 0 Mar 14 14:00:02.000: INFO: Container registrar ready: false, restart count 0 Mar 14 14:00:02.000: INFO: dra-test-driver-8jmwc started at 2023-03-14 14:00:01 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:02.000: INFO: Container plugin ready: false, restart count 0 Mar 14 14:00:02.000: INFO: Container registrar ready: false, restart 
count 0 Mar 14 14:00:02.000: INFO: kindnet-fzdn9 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:02.000: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:00:02.000: INFO: kube-proxy-l4q98 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:02.000: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:00:02.000: INFO: dra-test-driver-6zxqg started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:02.000: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:02.000: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:02.000: INFO: dra-test-driver-other-xtlpg started at 2023-03-14 13:58:51 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:02.000: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:02.000: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:02.596: INFO: Latency metrics for node kind-worker Mar 14 14:00:02.596: INFO: Logging node info for node kind-worker2 Mar 14 14:00:02.601: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 49a194e2-5e70-437e-aa3c-3a490ff23c54 1358 0 2023-03-14 13:58:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:58:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:59:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 
13:58:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:48810b9a669b47cea51d5fa0f821cf84,SystemUUID:603f9452-86ad-460a-83be-e3f10d4a362c,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 
registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:00:02.601: INFO: Logging kubelet events for node kind-worker2 Mar 14 14:00:02.606: INFO: Logging pods the kubelet thinks is on node kind-worker2 Mar 14 14:00:02.616: INFO: kube-proxy-vnlx8 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:02.616: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:00:02.616: INFO: dra-test-driver-7qxsw started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:02.616: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:02.616: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:02.616: INFO: dra-test-driver-lgb7f started at 2023-03-14 13:58:46 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:02.616: INFO: 
Container plugin ready: true, restart count 0 Mar 14 14:00:02.616: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:02.616: INFO: dra-test-driver-other-mp779 started at 2023-03-14 13:58:51 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:02.616: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:02.616: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:02.616: INFO: dra-test-driver-f8m4d started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:02.616: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:02.616: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:02.616: INFO: kindnet-5qdz7 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:02.616: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:00:02.616: INFO: dra-test-driver-jmtw2 started at 2023-03-14 14:00:00 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:02.616: INFO: Container plugin ready: false, restart count 0 Mar 14 14:00:02.616: INFO: Container registrar ready: false, restart count 0 Mar 14 14:00:02.616: INFO: dra-test-driver-ss4k7 started at 2023-03-14 14:00:01 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:02.616: INFO: Container plugin ready: false, restart count 0 Mar 14 14:00:02.616: INFO: Container registrar ready: false, restart count 0 Mar 14 14:00:02.675: INFO: Latency metrics for node kind-worker2 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:00:02.675 (941ms) < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:00:02.675 (941ms) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:00:02.675 STEP: Destroying namespace "dra-2539" for this suite. 
- test/e2e/framework/framework.go:351 @ 03/14/23 14:00:02.675 < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:00:02.682 (7ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:00:02.683 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:00:02.683 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sDRA\s\[Feature\:DynamicResourceAllocation\]\scluster\swith\simmediate\sallocation\ssupports\sexternal\sclaim\sreferenced\sby\smultiple\spods$'
[FAILED] Timed out after 60.001s.
claims in the namespaces
Expected
    <[]v1alpha2.ResourceClaim | len:1, cap:1>:
    - metadata:
        creationTimestamp: "2023-03-14T14:01:29Z"
        deletionGracePeriodSeconds: 0
        deletionTimestamp: "2023-03-14T14:01:38Z"
        finalizers:
        - dra-8197.k8s.io/deletion-protection
        managedFields:
        - apiVersion: resource.k8s.io/v1alpha2
          fieldsType: FieldsV1
          fieldsV1:
            f:metadata:
              f:finalizers:
                .: {}
                v:"dra-8197.k8s.io/deletion-protection": {}
            f:spec:
              f:allocationMode: {}
              f:parametersRef:
                .: {}
                f:kind: {}
                f:name: {}
              f:resourceClassName: {}
          manager: e2e.test
          operation: Update
          time: "2023-03-14T14:01:29Z"
        - apiVersion: resource.k8s.io/v1alpha2
          fieldsType: FieldsV1
          fieldsV1:
            f:status:
              f:allocation:
                .: {}
                f:availableOnNodes: {}
                f:context: {}
                f:shareable: {}
              f:driverName: {}
          manager: e2e.test
          operation: Update
          subresource: status
          time: "2023-03-14T14:01:29Z"
        name: external-claim
        namespace: dra-8197
        resourceVersion: "3472"
        uid: d70ab23d-0c31-4513-a91e-13c3d12fe90d
      spec:
        allocationMode: Immediate
        parametersRef:
          kind: ConfigMap
          name: parameters-1
        resourceClassName: dra-8197-class
      status:
        allocation:
          availableOnNodes:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - kind-worker
                - kind-worker2
          context:
          - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}'
          shareable: true
        driverName: dra-8197.k8s.io
to be empty
In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:02:38.01
from junit_01.xml
> Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 14:01:23.785 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:211 @ 03/14/23 14:01:23.785 Mar 14 14:01:23.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dra - test/e2e/framework/framework.go:250 @ 03/14/23 14:01:23.786 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 03/14/23 14:01:23.803 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:262 @ 03/14/23 14:01:23.807 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 14:01:23.811 (26ms) > Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 14:01:23.811 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 14:01:23.811 (0s) > Enter [BeforeEach] cluster - test/e2e/dra/deploy.go:62 @ 03/14/23 14:01:23.811 STEP: selecting nodes - test/e2e/dra/deploy.go:63 @ 03/14/23 14:01:23.811 Mar 14 14:01:23.815: INFO: testing on nodes [kind-worker kind-worker2] < Exit [BeforeEach] cluster - test/e2e/dra/deploy.go:62 @ 03/14/23 14:01:23.815 (4ms) > Enter [BeforeEach] cluster - test/e2e/dra/deploy.go:95 @ 03/14/23 14:01:23.815 STEP: deploying driver on nodes [kind-worker kind-worker2] - test/e2e/dra/deploy.go:130 @ 03/14/23 14:01:23.815 I0314 14:01:23.815764 66265 controller.go:295] "resource controller: Starting" driver="dra-8197.k8s.io" Mar 14 14:01:23.816: INFO: creating *v1.ReplicaSet: dra-8197/dra-test-driver I0314 14:01:27.838618 66265 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker" pod="dra-8197/dra-test-driver-9bgm4" I0314 14:01:27.838641 66265 nonblockinggrpcserver.go:107] 
"kubelet plugin/registrar: GRPC server started" node="kind-worker" pod="dra-8197/dra-test-driver-9bgm4" I0314 14:01:27.839606 66265 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker2" pod="dra-8197/dra-test-driver-h6qql" I0314 14:01:27.839619 66265 nonblockinggrpcserver.go:107] "kubelet plugin/registrar: GRPC server started" node="kind-worker2" pod="dra-8197/dra-test-driver-h6qql" STEP: wait for plugin registration - test/e2e/dra/deploy.go:242 @ 03/14/23 14:01:27.839 I0314 14:01:28.042170 66265 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-8197/dra-test-driver-9bgm4" requestID=1 request="&InfoRequest{}" I0314 14:01:28.042211 66265 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-8197/dra-test-driver-9bgm4" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-8197.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-8197.k8s.io.sock,SupportedVersions:[1.0.0],}" I0314 14:01:28.043113 66265 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-8197/dra-test-driver-h6qql" requestID=1 request="&InfoRequest{}" I0314 14:01:28.043165 66265 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-8197/dra-test-driver-h6qql" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-8197.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-8197.k8s.io.sock,SupportedVersions:[1.0.0],}" I0314 14:01:28.050979 66265 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-8197/dra-test-driver-h6qql" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}" I0314 14:01:28.050979 66265 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-8197/dra-test-driver-9bgm4" requestID=2 
request="&RegistrationStatus{PluginRegistered:true,Error:,}" I0314 14:01:28.051012 66265 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-8197/dra-test-driver-h6qql" requestID=2 response="&RegistrationStatusResponse{}" I0314 14:01:28.051014 66265 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-8197/dra-test-driver-9bgm4" requestID=2 response="&RegistrationStatusResponse{}" < Exit [BeforeEach] cluster - test/e2e/dra/deploy.go:95 @ 03/14/23 14:01:29.84 (6.025s) > Enter [BeforeEach] cluster - test/e2e/dra/dra.go:752 @ 03/14/23 14:01:29.84 STEP: creating *v1alpha2.ResourceClass dra-8197-class - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:29.84 END STEP: creating *v1alpha2.ResourceClass dra-8197-class - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:29.85 (10ms) < Exit [BeforeEach] cluster - test/e2e/dra/dra.go:752 @ 03/14/23 14:01:29.85 (10ms) > Enter [It] supports external claim referenced by multiple pods - test/e2e/dra/dra.go:196 @ 03/14/23 14:01:29.85 STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:29.85 END STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:29.857 (7ms) STEP: creating *v1alpha2.ResourceClaim external-claim - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:29.857 END STEP: creating *v1alpha2.ResourceClaim external-claim - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:29.863 (7ms) STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:29.864 END STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:29.873 (9ms) STEP: creating *v1.Pod tester-2 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:29.873 END STEP: creating *v1.Pod tester-2 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:29.881 (9ms) STEP: creating *v1.Pod tester-3 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:29.881 END STEP: creating *v1.Pod tester-3 - test/e2e/dra/dra.go:706 @ 
03/14/23 14:01:29.891 (10ms) I0314 14:01:30.257907 66265 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker2" pod="dra-8197/dra-test-driver-h6qql" requestID=1 request="&NodePrepareResourceRequest{Namespace:dra-8197,ClaimUid:d70ab23d-0c31-4513-a91e-13c3d12fe90d,ClaimName:external-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" STEP: creating CDI file /cdi/dra-8197.k8s.io-d70ab23d-0c31-4513-a91e-13c3d12fe90d.json on node kind-worker2: {"cdiVersion":"0.3.0","kind":"dra-8197.k8s.io/test","devices":[{"name":"claim-d70ab23d-0c31-4513-a91e-13c3d12fe90d","containerEdits":{"env":["user_a=b"]}}]} - test/e2e/dra/deploy.go:217 @ 03/14/23 14:01:30.257 Mar 14 14:01:30.257: INFO: >>> kubeConfig: /root/.kube/config Mar 14 14:01:30.258: INFO: ExecWithOptions: Clientset creation Mar 14 14:01:30.258: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-8197/pods/dra-test-driver-h6qql/exec?command=sh&command=-c&command=base64+-d+%3E%27%2Fcdi%2Fdra-8197.k8s.io-d70ab23d-0c31-4513-a91e-13c3d12fe90d.json.tmp%27+%3C%3CEOF%0AeyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTgxOTcuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tZDcwYWIyM2QtMGMzMS00NTEzLWE5MWUtMTNjM2QxMmZlOTBkIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19%0AEOF&container=plugin&container=plugin&stderr=true&stdout=true) I0314 14:01:30.360687 66265 io.go:119] "Command completed" command=< [sh -c base64 -d >'/cdi/dra-8197.k8s.io-d70ab23d-0c31-4513-a91e-13c3d12fe90d.json.tmp' <<EOF eyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTgxOTcuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tZDcwYWIyM2QtMGMzMS00NTEzLWE5MWUtMTNjM2QxMmZlOTBkIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19 EOF] > stdout="" stderr="" err=<nil> Mar 14 14:01:30.360: INFO: >>> kubeConfig: /root/.kube/config Mar 14 14:01:30.361: INFO: ExecWithOptions: Clientset creation Mar 14 14:01:30.361: INFO: ExecWithOptions: execute(POST 
https://127.0.0.1:34309/api/v1/namespaces/dra-8197/pods/dra-test-driver-h6qql/exec?command=mv&command=%2Fcdi%2Fdra-8197.k8s.io-d70ab23d-0c31-4513-a91e-13c3d12fe90d.json.tmp&command=%2Fcdi%2Fdra-8197.k8s.io-d70ab23d-0c31-4513-a91e-13c3d12fe90d.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 14:01:30.452807 66265 io.go:119] "Command completed" command=[mv /cdi/dra-8197.k8s.io-d70ab23d-0c31-4513-a91e-13c3d12fe90d.json.tmp /cdi/dra-8197.k8s.io-d70ab23d-0c31-4513-a91e-13c3d12fe90d.json] stdout="" stderr="" err=<nil> I0314 14:01:30.452866 66265 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker2" pod="dra-8197/dra-test-driver-h6qql" requestID=1 response="&NodePrepareResourceResponse{CdiDevices:[dra-8197.k8s.io/test=claim-d70ab23d-0c31-4513-a91e-13c3d12fe90d],}" I0314 14:01:31.299314 66265 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker" pod="dra-8197/dra-test-driver-9bgm4" requestID=1 request="&NodePrepareResourceRequest{Namespace:dra-8197,ClaimUid:d70ab23d-0c31-4513-a91e-13c3d12fe90d,ClaimName:external-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" STEP: creating CDI file /cdi/dra-8197.k8s.io-d70ab23d-0c31-4513-a91e-13c3d12fe90d.json on node kind-worker: {"cdiVersion":"0.3.0","kind":"dra-8197.k8s.io/test","devices":[{"name":"claim-d70ab23d-0c31-4513-a91e-13c3d12fe90d","containerEdits":{"env":["user_a=b"]}}]} - test/e2e/dra/deploy.go:217 @ 03/14/23 14:01:31.299 Mar 14 14:01:31.299: INFO: >>> kubeConfig: /root/.kube/config Mar 14 14:01:31.300: INFO: ExecWithOptions: Clientset creation Mar 14 14:01:31.300: INFO: ExecWithOptions: execute(POST 
https://127.0.0.1:34309/api/v1/namespaces/dra-8197/pods/dra-test-driver-9bgm4/exec?command=sh&command=-c&command=base64+-d+%3E%27%2Fcdi%2Fdra-8197.k8s.io-d70ab23d-0c31-4513-a91e-13c3d12fe90d.json.tmp%27+%3C%3CEOF%0AeyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTgxOTcuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tZDcwYWIyM2QtMGMzMS00NTEzLWE5MWUtMTNjM2QxMmZlOTBkIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19%0AEOF&container=plugin&container=plugin&stderr=true&stdout=true) I0314 14:01:31.431604 66265 io.go:119] "Command completed" command=< [sh -c base64 -d >'/cdi/dra-8197.k8s.io-d70ab23d-0c31-4513-a91e-13c3d12fe90d.json.tmp' <<EOF eyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTgxOTcuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tZDcwYWIyM2QtMGMzMS00NTEzLWE5MWUtMTNjM2QxMmZlOTBkIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19 EOF] > stdout="" stderr="" err=<nil> Mar 14 14:01:31.431: INFO: >>> kubeConfig: /root/.kube/config Mar 14 14:01:31.432: INFO: ExecWithOptions: Clientset creation Mar 14 14:01:31.432: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-8197/pods/dra-test-driver-9bgm4/exec?command=mv&command=%2Fcdi%2Fdra-8197.k8s.io-d70ab23d-0c31-4513-a91e-13c3d12fe90d.json.tmp&command=%2Fcdi%2Fdra-8197.k8s.io-d70ab23d-0c31-4513-a91e-13c3d12fe90d.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 14:01:31.557940 66265 io.go:119] "Command completed" command=[mv /cdi/dra-8197.k8s.io-d70ab23d-0c31-4513-a91e-13c3d12fe90d.json.tmp /cdi/dra-8197.k8s.io-d70ab23d-0c31-4513-a91e-13c3d12fe90d.json] stdout="" stderr="" err=<nil> I0314 14:01:31.558000 66265 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker" pod="dra-8197/dra-test-driver-9bgm4" requestID=1 response="&NodePrepareResourceResponse{CdiDevices:[dra-8197.k8s.io/test=claim-d70ab23d-0c31-4513-a91e-13c3d12fe90d],}" < Exit [It] supports external claim referenced by multiple pods - 
test/e2e/dra/dra.go:196 @ 03/14/23 14:01:33.94 (4.09s) > Enter [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 14:01:33.94 Mar 14 14:01:33.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 14:01:33.944 (4ms) > Enter [DeferCleanup (Each)] cluster - test/e2e/dra/dra.go:762 @ 03/14/23 14:01:33.944 STEP: delete pods and claims - test/e2e/dra/dra.go:773 @ 03/14/23 14:01:33.949 STEP: deleting *v1.Pod dra-8197/tester-1 - test/e2e/dra/dra.go:780 @ 03/14/23 14:01:33.954 STEP: deleting *v1.Pod dra-8197/tester-2 - test/e2e/dra/dra.go:780 @ 03/14/23 14:01:33.962 STEP: deleting *v1.Pod dra-8197/tester-3 - test/e2e/dra/dra.go:780 @ 03/14/23 14:01:33.973 I0314 14:01:35.287015 66265 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker" pod="dra-8197/dra-test-driver-9bgm4" requestID=2 request="&NodeUnprepareResourceRequest{Namespace:dra-8197,ClaimUid:d70ab23d-0c31-4513-a91e-13c3d12fe90d,ClaimName:external-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" STEP: deleting CDI file /cdi/dra-8197.k8s.io-d70ab23d-0c31-4513-a91e-13c3d12fe90d.json on node kind-worker - test/e2e/dra/deploy.go:221 @ 03/14/23 14:01:35.287 Mar 14 14:01:35.287: INFO: >>> kubeConfig: /root/.kube/config Mar 14 14:01:35.288: INFO: ExecWithOptions: Clientset creation Mar 14 14:01:35.288: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-8197/pods/dra-test-driver-9bgm4/exec?command=rm&command=-rf&command=%2Fcdi%2Fdra-8197.k8s.io-d70ab23d-0c31-4513-a91e-13c3d12fe90d.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 14:01:35.310070 66265 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker2" pod="dra-8197/dra-test-driver-h6qql" requestID=2 
request="&NodeUnprepareResourceRequest{Namespace:dra-8197,ClaimUid:d70ab23d-0c31-4513-a91e-13c3d12fe90d,ClaimName:external-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" I0314 14:01:35.419682 66265 io.go:119] "Command completed" command=[rm -rf /cdi/dra-8197.k8s.io-d70ab23d-0c31-4513-a91e-13c3d12fe90d.json] stdout="" stderr="" err=<nil> I0314 14:01:35.419734 66265 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker" pod="dra-8197/dra-test-driver-9bgm4" requestID=2 response="&NodeUnprepareResourceResponse{}" STEP: deleting CDI file /cdi/dra-8197.k8s.io-d70ab23d-0c31-4513-a91e-13c3d12fe90d.json on node kind-worker2 - test/e2e/dra/deploy.go:221 @ 03/14/23 14:01:35.419 Mar 14 14:01:35.419: INFO: >>> kubeConfig: /root/.kube/config Mar 14 14:01:35.420: INFO: ExecWithOptions: Clientset creation Mar 14 14:01:35.420: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-8197/pods/dra-test-driver-h6qql/exec?command=rm&command=-rf&command=%2Fcdi%2Fdra-8197.k8s.io-d70ab23d-0c31-4513-a91e-13c3d12fe90d.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 14:01:35.554212 66265 io.go:119] "Command completed" command=[rm -rf /cdi/dra-8197.k8s.io-d70ab23d-0c31-4513-a91e-13c3d12fe90d.json] stdout="" stderr="" err=<nil> I0314 14:01:35.554251 66265 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker2" pod="dra-8197/dra-test-driver-h6qql" requestID=2 response="&NodeUnprepareResourceResponse{}" STEP: deleting *v1alpha2.ResourceClaim dra-8197/external-claim - test/e2e/dra/dra.go:796 @ 03/14/23 14:01:38.002 STEP: waiting for resources on kind-worker to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 14:01:38.007 STEP: waiting for resources on kind-worker2 to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 14:01:38.007 STEP: waiting for claims to be deallocated and deleted - test/e2e/dra/dra.go:808 @ 03/14/23 14:01:38.007 
E0314 14:01:38.012608 66265 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-8197/external-claim"
E0314 14:01:38.021952 66265 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-8197/external-claim"
E0314 14:01:38.041844 66265 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-8197/external-claim"
E0314 14:01:38.066189 66265 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-8197/external-claim"
E0314 14:01:38.110728 66265 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-8197/external-claim"
E0314 14:01:38.196895 66265 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-8197/external-claim"
E0314 14:01:38.361718 66265 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-8197/external-claim"
E0314 14:01:38.687432 66265 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-8197/external-claim"
E0314 14:01:39.332783 66265 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-8197/external-claim"
E0314 14:01:40.618821 66265 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-8197/external-claim"
E0314 14:01:43.183964 66265 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-8197/external-claim"
E0314 14:01:48.309884 66265 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-8197/external-claim"
E0314 14:01:58.554630 66265 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-8197/external-claim"
E0314 14:02:19.039472 66265 controller.go:345] "resource controller: processing failed" err="remove allocation:
ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-8197/external-claim"
[FAILED] Timed out after 60.001s. claims in the namespaces Expected <[]v1alpha2.ResourceClaim | len:1, cap:1>:
- metadata:
    creationTimestamp: "2023-03-14T14:01:29Z"
    deletionGracePeriodSeconds: 0
    deletionTimestamp: "2023-03-14T14:01:38Z"
    finalizers:
    - dra-8197.k8s.io/deletion-protection
    managedFields:
    - apiVersion: resource.k8s.io/v1alpha2
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:finalizers:
            .: {}
            v:"dra-8197.k8s.io/deletion-protection": {}
        f:spec:
          f:allocationMode: {}
          f:parametersRef:
            .: {}
            f:kind: {}
            f:name: {}
          f:resourceClassName: {}
      manager: e2e.test
      operation: Update
      time: "2023-03-14T14:01:29Z"
    - apiVersion: resource.k8s.io/v1alpha2
      fieldsType: FieldsV1
      fieldsV1:
        f:status:
          f:allocation:
            .: {}
            f:availableOnNodes: {}
            f:context: {}
            f:shareable: {}
          f:driverName: {}
      manager: e2e.test
      operation: Update
      subresource: status
      time: "2023-03-14T14:01:29Z"
    name: external-claim
    namespace: dra-8197
    resourceVersion: "3472"
    uid: d70ab23d-0c31-4513-a91e-13c3d12fe90d
  spec:
    allocationMode: Immediate
    parametersRef:
      kind: ConfigMap
      name: parameters-1
    resourceClassName: dra-8197-class
  status:
    allocation:
      availableOnNodes:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - kind-worker
            - kind-worker2
      context:
      - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}'
      shareable: true
    driverName: dra-8197.k8s.io
to be empty
In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:02:38.01 < Exit [DeferCleanup (Each)] cluster - test/e2e/dra/dra.go:762 @ 03/14/23 14:02:38.01 (1m4.066s) > Enter [DeferCleanup (Each)] cluster - test/e2e/dra/deploy.go:103 @ 03/14/23 14:02:38.01 I0314 14:02:38.010386 66265 controller.go:310] "resource controller: Shutting down" driver="dra-8197.k8s.io" E0314 14:02:38.011347 66265 nonblockinggrpcserver.go:101] "kubelet
plugin/registrar: GRPC server failed" err="listening was stopped" node="kind-worker2" pod="dra-8197/dra-test-driver-h6qql" E0314 14:02:38.011411 66265 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker2" pod="dra-8197/dra-test-driver-h6qql" E0314 14:02:38.011440 66265 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker" pod="dra-8197/dra-test-driver-9bgm4" < Exit [DeferCleanup (Each)] cluster - test/e2e/dra/deploy.go:103 @ 03/14/23 14:02:38.011 (2ms) > Enter [DeferCleanup (Each)] cluster - deleting *v1.ReplicaSet: dra-8197/dra-test-driver | create.go:156 @ 03/14/23 14:02:38.011 < Exit [DeferCleanup (Each)] cluster - deleting *v1.ReplicaSet: dra-8197/dra-test-driver | create.go:156 @ 03/14/23 14:02:38.021 (9ms) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 14:02:38.021 < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 14:02:38.021 (0s) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:02:38.021 STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:02:38.021 STEP: Collecting events from namespace "dra-8197". - test/e2e/framework/debug/dump.go:42 @ 03/14/23 14:02:38.021 STEP: Found 34 events. 
- test/e2e/framework/debug/dump.go:46 @ 03/14/23 14:02:38.026 Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:23 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-9bgm4 Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:23 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-h6qql Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:23 +0000 UTC - event for dra-test-driver-9bgm4: {default-scheduler } Scheduled: Successfully assigned dra-8197/dra-test-driver-9bgm4 to kind-worker Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:23 +0000 UTC - event for dra-test-driver-h6qql: {default-scheduler } Scheduled: Successfully assigned dra-8197/dra-test-driver-h6qql to kind-worker2 Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:24 +0000 UTC - event for dra-test-driver-9bgm4: {kubelet kind-worker} Created: Created container plugin Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:24 +0000 UTC - event for dra-test-driver-9bgm4: {kubelet kind-worker} Started: Started container registrar Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:24 +0000 UTC - event for dra-test-driver-9bgm4: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:24 +0000 UTC - event for dra-test-driver-9bgm4: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:24 +0000 UTC - event for dra-test-driver-9bgm4: {kubelet kind-worker} Created: Created container registrar Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:24 +0000 UTC - event for dra-test-driver-h6qql: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:24 +0000 UTC - event for 
dra-test-driver-h6qql: {kubelet kind-worker2} Started: Started container registrar Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:24 +0000 UTC - event for dra-test-driver-h6qql: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:24 +0000 UTC - event for dra-test-driver-h6qql: {kubelet kind-worker2} Created: Created container plugin Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:24 +0000 UTC - event for dra-test-driver-h6qql: {kubelet kind-worker2} Created: Created container registrar Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:25 +0000 UTC - event for dra-test-driver-9bgm4: {kubelet kind-worker} Started: Started container plugin Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:25 +0000 UTC - event for dra-test-driver-h6qql: {kubelet kind-worker2} Started: Started container plugin Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:29 +0000 UTC - event for tester-1: {default-scheduler } FailedScheduling: 0/3 nodes are available: unallocated immediate resourceclaim. no new claims to deallocate, preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod.. 
Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:29 +0000 UTC - event for tester-2: {default-scheduler } Scheduled: Successfully assigned dra-8197/tester-2 to kind-worker2 Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:29 +0000 UTC - event for tester-3: {default-scheduler } FailedScheduling: running Reserve plugin "DynamicResources": Operation cannot be fulfilled on resourceclaims.resource.k8s.io "external-claim": the object has been modified; please apply your changes to the latest version and try again Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:30 +0000 UTC - event for tester-1: {default-scheduler } Scheduled: Successfully assigned dra-8197/tester-1 to kind-worker Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:30 +0000 UTC - event for tester-2: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:30 +0000 UTC - event for tester-2: {kubelet kind-worker2} Started: Started container with-resource Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:30 +0000 UTC - event for tester-2: {kubelet kind-worker2} Created: Created container with-resource Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:30 +0000 UTC - event for tester-3: {default-scheduler } Scheduled: Successfully assigned dra-8197/tester-3 to kind-worker2 Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:31 +0000 UTC - event for tester-1: {kubelet kind-worker} Created: Created container with-resource Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:31 +0000 UTC - event for tester-1: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:31 +0000 UTC - event for tester-3: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:31 +0000 UTC - event for tester-3: {kubelet 
kind-worker2} Created: Created container with-resource Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:31 +0000 UTC - event for tester-3: {kubelet kind-worker2} Started: Started container with-resource Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:32 +0000 UTC - event for tester-1: {kubelet kind-worker} Started: Started container with-resource Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:33 +0000 UTC - event for tester-1: {kubelet kind-worker} Killing: Stopping container with-resource Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:33 +0000 UTC - event for tester-2: {kubelet kind-worker2} Killing: Stopping container with-resource Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:33 +0000 UTC - event for tester-3: {kubelet kind-worker2} Killing: Stopping container with-resource Mar 14 14:02:38.026: INFO: At 2023-03-14 14:01:38 +0000 UTC - event for external-claim: {resource driver dra-8197.k8s.io } Failed: remove allocation: ResourceClaim.resource.k8s.io "external-claim" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete Mar 14 14:02:38.029: INFO: POD NODE PHASE GRACE CONDITIONS Mar 14 14:02:38.029: INFO: dra-test-driver-9bgm4 kind-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:23 +0000 UTC }] Mar 14 14:02:38.029: INFO: dra-test-driver-h6qql kind-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:23 +0000 UTC }] Mar 14 14:02:38.029: INFO: Mar 14 14:02:38.084: INFO: Logging node info for 
node kind-control-plane Mar 14 14:02:38.089: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7b0c8f1f-7d2e-4b5f-ab52-0e2399b9f764 438 0 2023-03-14 13:57:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-14 13:57:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:57:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} 
status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:58:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e8e6b089f1f44ab8ef4a2bc879ddd73,SystemUUID:ee43f17b-1489-4ea4-bec5-b7916f4f1fb0,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 
registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:02:38.089: INFO: Logging kubelet events for node kind-control-plane Mar 14 14:02:38.094: INFO: Logging pods the kubelet thinks is on node kind-control-plane Mar 14 14:02:38.109: INFO: local-path-provisioner-687869657c-v9k2k started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:38.109: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 14 14:02:38.109: INFO: kube-proxy-fm2jh started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:38.109: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:02:38.109: INFO: coredns-ffc665895-mnldc started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:38.109: INFO: Container coredns ready: true, restart count 0 Mar 14 14:02:38.109: INFO: etcd-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:38.109: INFO: Container etcd ready: true, restart count 0 Mar 14 14:02:38.109: INFO: kube-apiserver-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:38.109: INFO: Container kube-apiserver ready: true, restart count 0 Mar 14 14:02:38.109: INFO: kindnet-nx87k started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:38.109: INFO: Container kindnet-cni ready: true, restart count 0 
Mar 14 14:02:38.109: INFO: coredns-ffc665895-vmqts started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:38.109: INFO: Container coredns ready: true, restart count 0 Mar 14 14:02:38.109: INFO: kube-controller-manager-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:38.109: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 14 14:02:38.109: INFO: kube-scheduler-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:38.109: INFO: Container kube-scheduler ready: true, restart count 0 Mar 14 14:02:38.169: INFO: Latency metrics for node kind-control-plane Mar 14 14:02:38.169: INFO: Logging node info for node kind-worker Mar 14 14:02:38.173: INFO: Node Info: &Node{ObjectMeta:{kind-worker 9cca062e-b3b4-4ef2-9c10-412063b4ece4 1368 0 2023-03-14 13:58:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-14 13:58:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update 
v1 2023-03-14 13:59:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:26 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5a3b3841831c42fc96e5cb187f537f04,SystemUUID:ed67c939-37e3-47de-ab06-0144304a5aa1,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 
registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:02:38.173: INFO: Logging kubelet events for node kind-worker Mar 14 14:02:38.178: INFO: Logging pods the kubelet thinks is on node kind-worker Mar 14 14:02:38.187: INFO: kindnet-fzdn9 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:38.187: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:02:38.187: INFO: kube-proxy-l4q98 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:38.187: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:02:38.187: INFO: dra-test-driver-zg2wf started at 2023-03-14 14:01:29 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:38.187: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:38.187: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:38.187: INFO: dra-test-driver-xrvr8 started at 2023-03-14 14:01:15 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:38.187: INFO: Container plugin ready: 
true, restart count 0 Mar 14 14:02:38.187: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:38.187: INFO: dra-test-driver-9bgm4 started at 2023-03-14 14:01:23 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:38.187: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:38.187: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:38.235: INFO: Latency metrics for node kind-worker Mar 14 14:02:38.235: INFO: Logging node info for node kind-worker2 Mar 14 14:02:38.239: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 49a194e2-5e70-437e-aa3c-3a490ff23c54 1358 0 2023-03-14 13:58:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:58:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:59:11 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:48810b9a669b47cea51d5fa0f821cf84,SystemUUID:603f9452-86ad-460a-83be-e3f10d4a362c,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 
registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:02:38.240: INFO: Logging kubelet events for node kind-worker2 Mar 14 14:02:38.245: INFO: Logging pods the kubelet thinks is on node kind-worker2 Mar 14 14:02:38.253: INFO: kindnet-5qdz7 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:38.253: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:02:38.253: INFO: dra-test-driver-h6qql started at 2023-03-14 14:01:23 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:38.253: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:38.253: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:38.253: INFO: kube-proxy-vnlx8 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:38.253: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:02:38.253: INFO: dra-test-driver-9f5vg started at 2023-03-14 14:01:29 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:38.253: INFO: Container plugin ready: 
true, restart count 0 Mar 14 14:02:38.253: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:38.313: INFO: Latency metrics for node kind-worker2 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:02:38.313 (292ms) < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:02:38.313 (292ms) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:02:38.313 STEP: Destroying namespace "dra-8197" for this suite. - test/e2e/framework/framework.go:351 @ 03/14/23 14:02:38.313 < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:02:38.319 (6ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:02:38.319 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:02:38.319 (0s)
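Both timeouts trace back to the repeated apiserver rejection visible further down in the second test's log: `status.allocation: Forbidden: can only remove while marking a deallocation as complete`. The test's mock controller apparently cleared `status.allocation` directly instead of going through the deallocation handshake. A minimal, self-contained sketch of the rule that error message implies (hypothetical simplified types, not the real `resource.k8s.io/v1alpha2` API):

```go
package main

import "fmt"

// claimStatus is a hypothetical, simplified stand-in for the two
// ResourceClaim status fields involved in the "Forbidden" errors.
type claimStatus struct {
	allocated             bool
	deallocationRequested bool
}

// validateStatusUpdate models the apiserver rule implied by the error
// "can only remove while marking a deallocation as complete": an
// allocation may only disappear when deallocationRequested flips from
// true to false in the same status update.
func validateStatusUpdate(oldS, newS claimStatus) error {
	if oldS.allocated && !newS.allocated {
		if !(oldS.deallocationRequested && !newS.deallocationRequested) {
			return fmt.Errorf("status.allocation: Forbidden: can only remove while marking a deallocation as complete")
		}
	}
	return nil
}

func main() {
	// What the log suggests the driver did: drop the allocation directly.
	err := validateStatusUpdate(
		claimStatus{allocated: true, deallocationRequested: false},
		claimStatus{allocated: false, deallocationRequested: false},
	)
	fmt.Println("direct removal:", err)

	// The allowed transition: deallocationRequested is set first, then
	// the driver clears both fields together.
	err = validateStatusUpdate(
		claimStatus{allocated: true, deallocationRequested: true},
		claimStatus{allocated: false, deallocationRequested: false},
	)
	fmt.Println("deallocation complete:", err)
}
```

Because the forbidden update keeps failing, the `deletion-protection` finalizer is never removed and the claim survives until the 60s cleanup timeout in `test/e2e/dra/dra.go:815`.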
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sDRA\s\[Feature\:DynamicResourceAllocation\]\scluster\swith\simmediate\sallocation\ssupports\sinit\scontainers$'
[FAILED] Timed out after 60.001s. claims in the namespaces Expected <[]v1alpha2.ResourceClaim | len:1, cap:1>: - metadata: creationTimestamp: "2023-03-14T14:01:26Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T14:01:33Z" finalizers: - dra-8543.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"dra-8543.k8s.io/deletion-protection": {} manager: e2e.test operation: Update time: "2023-03-14T14:01:26Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: f:allocation: .: {} f:availableOnNodes: {} f:context: {} f:shareable: {} f:driverName: {} manager: e2e.test operation: Update subresource: status time: "2023-03-14T14:01:26Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:ownerReferences: .: {} k:{"uid":"3fb06bf9-ba8c-424c-bf43-c5e36db207d6"}: {} f:spec: f:allocationMode: {} f:parametersRef: .: {} f:kind: {} f:name: {} f:resourceClassName: {} manager: kube-controller-manager operation: Update time: "2023-03-14T14:01:26Z" name: tester-1-my-inline-claim namespace: dra-8543 ownerReferences: - apiVersion: v1 blockOwnerDeletion: true controller: true kind: Pod name: tester-1 uid: 3fb06bf9-ba8c-424c-bf43-c5e36db207d6 resourceVersion: "3413" uid: 39169baa-eb05-4f4b-ae59-4d14142be9f0 spec: allocationMode: Immediate parametersRef: kind: ConfigMap name: parameters-1 resourceClassName: dra-8543-class status: allocation: availableOnNodes: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kind-worker - kind-worker2 context: - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}' shareable: true driverName: dra-8543.k8s.io to be empty In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:02:34.139 (from junit_01.xml)
> Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 14:01:19.946 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:211 @ 03/14/23 14:01:19.946 Mar 14 14:01:19.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dra - test/e2e/framework/framework.go:250 @ 03/14/23 14:01:19.948 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 03/14/23 14:01:19.968 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:262 @ 03/14/23 14:01:19.973 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 14:01:19.977 (31ms) > Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 14:01:19.977 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 14:01:19.977 (0s) > Enter [BeforeEach] cluster - test/e2e/dra/deploy.go:62 @ 03/14/23 14:01:19.977 STEP: selecting nodes - test/e2e/dra/deploy.go:63 @ 03/14/23 14:01:19.977 Mar 14 14:01:19.981: INFO: testing on nodes [kind-worker kind-worker2] < Exit [BeforeEach] cluster - test/e2e/dra/deploy.go:62 @ 03/14/23 14:01:19.981 (4ms) > Enter [BeforeEach] cluster - test/e2e/dra/deploy.go:95 @ 03/14/23 14:01:19.981 STEP: deploying driver on nodes [kind-worker kind-worker2] - test/e2e/dra/deploy.go:130 @ 03/14/23 14:01:19.981 I0314 14:01:19.982266 66289 controller.go:295] "resource controller: Starting" driver="dra-8543.k8s.io" Mar 14 14:01:19.983: INFO: creating *v1.ReplicaSet: dra-8543/dra-test-driver I0314 14:01:24.018242 66289 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker" pod="dra-8543/dra-test-driver-dx7tb" I0314 14:01:24.018280 66289 nonblockinggrpcserver.go:107] 
"kubelet plugin/registrar: GRPC server started" node="kind-worker" pod="dra-8543/dra-test-driver-dx7tb" I0314 14:01:24.019795 66289 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker2" pod="dra-8543/dra-test-driver-w6qnj" I0314 14:01:24.019820 66289 nonblockinggrpcserver.go:107] "kubelet plugin/registrar: GRPC server started" node="kind-worker2" pod="dra-8543/dra-test-driver-w6qnj" STEP: wait for plugin registration - test/e2e/dra/deploy.go:242 @ 03/14/23 14:01:24.019 I0314 14:01:24.223104 66289 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-8543/dra-test-driver-w6qnj" requestID=1 request="&InfoRequest{}" I0314 14:01:24.223159 66289 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-8543/dra-test-driver-w6qnj" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-8543.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-8543.k8s.io.sock,SupportedVersions:[1.0.0],}" I0314 14:01:24.223382 66289 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-8543/dra-test-driver-dx7tb" requestID=1 request="&InfoRequest{}" I0314 14:01:24.223414 66289 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-8543/dra-test-driver-dx7tb" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-8543.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-8543.k8s.io.sock,SupportedVersions:[1.0.0],}" I0314 14:01:24.225540 66289 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-8543/dra-test-driver-dx7tb" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}" I0314 14:01:24.225563 66289 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-8543/dra-test-driver-dx7tb" requestID=2 
response="&RegistrationStatusResponse{}" I0314 14:01:24.227466 66289 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-8543/dra-test-driver-w6qnj" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}" I0314 14:01:24.227495 66289 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-8543/dra-test-driver-w6qnj" requestID=2 response="&RegistrationStatusResponse{}" < Exit [BeforeEach] cluster - test/e2e/dra/deploy.go:95 @ 03/14/23 14:01:26.021 (6.039s) > Enter [BeforeEach] cluster - test/e2e/dra/dra.go:752 @ 03/14/23 14:01:26.021 STEP: creating *v1alpha2.ResourceClass dra-8543-class - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:26.021 END STEP: creating *v1alpha2.ResourceClass dra-8543-class - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:26.026 (5ms) < Exit [BeforeEach] cluster - test/e2e/dra/dra.go:752 @ 03/14/23 14:01:26.026 (5ms) > Enter [It] supports init containers - test/e2e/dra/dra.go:222 @ 03/14/23 14:01:26.026 STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:26.026 END STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:26.035 (9ms) STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:26.035 END STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:26.043 (8ms) STEP: creating *v1alpha2.ResourceClaimTemplate tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:26.043 END STEP: creating *v1alpha2.ResourceClaimTemplate tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:26.064 (21ms) I0314 14:01:28.294170 66289 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker2" pod="dra-8543/dra-test-driver-w6qnj" requestID=1 
request="&NodePrepareResourceRequest{Namespace:dra-8543,ClaimUid:39169baa-eb05-4f4b-ae59-4d14142be9f0,ClaimName:tester-1-my-inline-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" STEP: creating CDI file /cdi/dra-8543.k8s.io-39169baa-eb05-4f4b-ae59-4d14142be9f0.json on node kind-worker2: {"cdiVersion":"0.3.0","kind":"dra-8543.k8s.io/test","devices":[{"name":"claim-39169baa-eb05-4f4b-ae59-4d14142be9f0","containerEdits":{"env":["user_a=b"]}}]} - test/e2e/dra/deploy.go:217 @ 03/14/23 14:01:28.294 Mar 14 14:01:28.294: INFO: >>> kubeConfig: /root/.kube/config Mar 14 14:01:28.295: INFO: ExecWithOptions: Clientset creation Mar 14 14:01:28.295: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-8543/pods/dra-test-driver-w6qnj/exec?command=sh&command=-c&command=base64+-d+%3E%27%2Fcdi%2Fdra-8543.k8s.io-39169baa-eb05-4f4b-ae59-4d14142be9f0.json.tmp%27+%3C%3CEOF%0AeyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTg1NDMuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tMzkxNjliYWEtZWIwNS00ZjRiLWFlNTktNGQxNDE0MmJlOWYwIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19%0AEOF&container=plugin&container=plugin&stderr=true&stdout=true) I0314 14:01:28.432419 66289 io.go:119] "Command completed" command=< [sh -c base64 -d >'/cdi/dra-8543.k8s.io-39169baa-eb05-4f4b-ae59-4d14142be9f0.json.tmp' <<EOF eyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTg1NDMuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tMzkxNjliYWEtZWIwNS00ZjRiLWFlNTktNGQxNDE0MmJlOWYwIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19 EOF] > stdout="" stderr="" err=<nil> Mar 14 14:01:28.432: INFO: >>> kubeConfig: /root/.kube/config Mar 14 14:01:28.433: INFO: ExecWithOptions: Clientset creation Mar 14 14:01:28.433: INFO: ExecWithOptions: execute(POST 
https://127.0.0.1:34309/api/v1/namespaces/dra-8543/pods/dra-test-driver-w6qnj/exec?command=mv&command=%2Fcdi%2Fdra-8543.k8s.io-39169baa-eb05-4f4b-ae59-4d14142be9f0.json.tmp&command=%2Fcdi%2Fdra-8543.k8s.io-39169baa-eb05-4f4b-ae59-4d14142be9f0.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 14:01:28.606150 66289 io.go:119] "Command completed" command=[mv /cdi/dra-8543.k8s.io-39169baa-eb05-4f4b-ae59-4d14142be9f0.json.tmp /cdi/dra-8543.k8s.io-39169baa-eb05-4f4b-ae59-4d14142be9f0.json] stdout="" stderr="" err=<nil> I0314 14:01:28.606204 66289 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker2" pod="dra-8543/dra-test-driver-w6qnj" requestID=1 response="&NodePrepareResourceResponse{CdiDevices:[dra-8543.k8s.io/test=claim-39169baa-eb05-4f4b-ae59-4d14142be9f0],}" < Exit [It] supports init containers - test/e2e/dra/dra.go:222 @ 03/14/23 14:01:32.097 (6.071s) > Enter [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 14:01:32.097 Mar 14 14:01:32.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 14:01:32.102 (4ms) > Enter [DeferCleanup (Each)] cluster - test/e2e/dra/dra.go:762 @ 03/14/23 14:01:32.102 STEP: delete pods and claims - test/e2e/dra/dra.go:773 @ 03/14/23 14:01:32.108 STEP: deleting *v1.Pod dra-8543/tester-1 - test/e2e/dra/dra.go:780 @ 03/14/23 14:01:32.114 I0314 14:01:33.354717 66289 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker2" pod="dra-8543/dra-test-driver-w6qnj" requestID=2 request="&NodeUnprepareResourceRequest{Namespace:dra-8543,ClaimUid:39169baa-eb05-4f4b-ae59-4d14142be9f0,ClaimName:tester-1-my-inline-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" STEP: deleting CDI file 
/cdi/dra-8543.k8s.io-39169baa-eb05-4f4b-ae59-4d14142be9f0.json on node kind-worker2 - test/e2e/dra/deploy.go:221 @ 03/14/23 14:01:33.354 Mar 14 14:01:33.354: INFO: >>> kubeConfig: /root/.kube/config Mar 14 14:01:33.355: INFO: ExecWithOptions: Clientset creation Mar 14 14:01:33.355: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-8543/pods/dra-test-driver-w6qnj/exec?command=rm&command=-rf&command=%2Fcdi%2Fdra-8543.k8s.io-39169baa-eb05-4f4b-ae59-4d14142be9f0.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 14:01:33.451909 66289 io.go:119] "Command completed" command=[rm -rf /cdi/dra-8543.k8s.io-39169baa-eb05-4f4b-ae59-4d14142be9f0.json] stdout="" stderr="" err=<nil> I0314 14:01:33.451958 66289 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker2" pod="dra-8543/dra-test-driver-w6qnj" requestID=2 response="&NodeUnprepareResourceResponse{}" E0314 14:01:33.861538 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-8543/tester-1-my-inline-claim" E0314 14:01:33.872511 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-8543/tester-1-my-inline-claim" E0314 14:01:33.890221 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-8543/tester-1-my-inline-claim" E0314 14:01:33.917735 66289 controller.go:345] "resource controller: processing failed" err="remove 
allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-8543/tester-1-my-inline-claim" E0314 14:01:33.962786 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-8543/tester-1-my-inline-claim" E0314 14:01:34.047331 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-8543/tester-1-my-inline-claim" STEP: waiting for resources on kind-worker to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 14:01:34.137 STEP: waiting for resources on kind-worker2 to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 14:01:34.137 STEP: waiting for claims to be deallocated and deleted - test/e2e/dra/dra.go:808 @ 03/14/23 14:01:34.137 E0314 14:01:34.212991 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-8543/tester-1-my-inline-claim" E0314 14:01:34.539006 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-8543/tester-1-my-inline-claim" E0314 14:01:35.193864 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: 
status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-8543/tester-1-my-inline-claim" E0314 14:01:36.479301 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-8543/tester-1-my-inline-claim" E0314 14:01:39.043760 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-8543/tester-1-my-inline-claim" E0314 14:01:44.169138 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-8543/tester-1-my-inline-claim" E0314 14:01:54.413689 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-8543/tester-1-my-inline-claim" E0314 14:02:14.900436 66289 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-8543/tester-1-my-inline-claim" [FAILED] Timed out after 60.001s. 
claims in the namespaces Expected <[]v1alpha2.ResourceClaim | len:1, cap:1>: - metadata: creationTimestamp: "2023-03-14T14:01:26Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T14:01:33Z" finalizers: - dra-8543.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"dra-8543.k8s.io/deletion-protection": {} manager: e2e.test operation: Update time: "2023-03-14T14:01:26Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: f:allocation: .: {} f:availableOnNodes: {} f:context: {} f:shareable: {} f:driverName: {} manager: e2e.test operation: Update subresource: status time: "2023-03-14T14:01:26Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:ownerReferences: .: {} k:{"uid":"3fb06bf9-ba8c-424c-bf43-c5e36db207d6"}: {} f:spec: f:allocationMode: {} f:parametersRef: .: {} f:kind: {} f:name: {} f:resourceClassName: {} manager: kube-controller-manager operation: Update time: "2023-03-14T14:01:26Z" name: tester-1-my-inline-claim namespace: dra-8543 ownerReferences: - apiVersion: v1 blockOwnerDeletion: true controller: true kind: Pod name: tester-1 uid: 3fb06bf9-ba8c-424c-bf43-c5e36db207d6 resourceVersion: "3413" uid: 39169baa-eb05-4f4b-ae59-4d14142be9f0 spec: allocationMode: Immediate parametersRef: kind: ConfigMap name: parameters-1 resourceClassName: dra-8543-class status: allocation: availableOnNodes: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kind-worker - kind-worker2 context: - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}' shareable: true driverName: dra-8543.k8s.io to be empty In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:02:34.139 < Exit [DeferCleanup (Each)] cluster - test/e2e/dra/dra.go:762 @ 03/14/23 14:02:34.139 (1m2.037s) > Enter [DeferCleanup (Each)] cluster - test/e2e/dra/deploy.go:103 @ 03/14/23 14:02:34.139 I0314 
14:02:34.139486 66289 controller.go:310] "resource controller: Shutting down" driver="dra-8543.k8s.io" E0314 14:02:34.140399 66289 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker2" pod="dra-8543/dra-test-driver-w6qnj" E0314 14:02:34.140557 66289 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker" pod="dra-8543/dra-test-driver-dx7tb" E0314 14:02:34.140679 66289 nonblockinggrpcserver.go:101] "kubelet plugin/registrar: GRPC server failed" err="listening was stopped" node="kind-worker2" pod="dra-8543/dra-test-driver-w6qnj" < Exit [DeferCleanup (Each)] cluster - test/e2e/dra/deploy.go:103 @ 03/14/23 14:02:34.142 (3ms) > Enter [DeferCleanup (Each)] cluster - deleting *v1.ReplicaSet: dra-8543/dra-test-driver | create.go:156 @ 03/14/23 14:02:34.142 < Exit [DeferCleanup (Each)] cluster - deleting *v1.ReplicaSet: dra-8543/dra-test-driver | create.go:156 @ 03/14/23 14:02:34.155 (14ms) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 14:02:34.155 < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 14:02:34.156 (0s) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:02:34.156 STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:02:34.156 STEP: Collecting events from namespace "dra-8543". - test/e2e/framework/debug/dump.go:42 @ 03/14/23 14:02:34.156 STEP: Found 27 events. 
- test/e2e/framework/debug/dump.go:46 @ 03/14/23 14:02:34.16 Mar 14 14:02:34.160: INFO: At 2023-03-14 14:01:20 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-w6qnj Mar 14 14:02:34.160: INFO: At 2023-03-14 14:01:20 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-dx7tb Mar 14 14:02:34.160: INFO: At 2023-03-14 14:01:20 +0000 UTC - event for dra-test-driver-dx7tb: {default-scheduler } Scheduled: Successfully assigned dra-8543/dra-test-driver-dx7tb to kind-worker Mar 14 14:02:34.160: INFO: At 2023-03-14 14:01:20 +0000 UTC - event for dra-test-driver-dx7tb: {kubelet kind-worker} Started: Started container registrar Mar 14 14:02:34.160: INFO: At 2023-03-14 14:01:20 +0000 UTC - event for dra-test-driver-dx7tb: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:02:34.160: INFO: At 2023-03-14 14:01:20 +0000 UTC - event for dra-test-driver-dx7tb: {kubelet kind-worker} Created: Created container plugin Mar 14 14:02:34.160: INFO: At 2023-03-14 14:01:20 +0000 UTC - event for dra-test-driver-dx7tb: {kubelet kind-worker} Created: Created container registrar Mar 14 14:02:34.160: INFO: At 2023-03-14 14:01:20 +0000 UTC - event for dra-test-driver-dx7tb: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:02:34.160: INFO: At 2023-03-14 14:01:20 +0000 UTC - event for dra-test-driver-w6qnj: {default-scheduler } Scheduled: Successfully assigned dra-8543/dra-test-driver-w6qnj to kind-worker2 Mar 14 14:02:34.160: INFO: At 2023-03-14 14:01:20 +0000 UTC - event for dra-test-driver-w6qnj: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:02:34.160: INFO: At 2023-03-14 14:01:20 +0000 UTC - event for 
dra-test-driver-w6qnj: {kubelet kind-worker2} Created: Created container registrar Mar 14 14:02:34.160: INFO: At 2023-03-14 14:01:20 +0000 UTC - event for dra-test-driver-w6qnj: {kubelet kind-worker2} Started: Started container registrar Mar 14 14:02:34.160: INFO: At 2023-03-14 14:01:20 +0000 UTC - event for dra-test-driver-w6qnj: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:02:34.160: INFO: At 2023-03-14 14:01:20 +0000 UTC - event for dra-test-driver-w6qnj: {kubelet kind-worker2} Created: Created container plugin Mar 14 14:02:34.160: INFO: At 2023-03-14 14:01:21 +0000 UTC - event for dra-test-driver-dx7tb: {kubelet kind-worker} Started: Started container plugin Mar 14 14:02:34.160: INFO: At 2023-03-14 14:01:21 +0000 UTC - event for dra-test-driver-w6qnj: {kubelet kind-worker2} Started: Started container plugin Mar 14 14:02:34.160: INFO: At 2023-03-14 14:01:26 +0000 UTC - event for tester-1: {default-scheduler } FailedScheduling: 0/3 nodes are available: waiting for dynamic resource controller to create the resourceclaim "tester-1-my-inline-claim". no new claims to deallocate, preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod.. 
Mar 14 14:02:34.160: INFO: At 2023-03-14 14:01:26 +0000 UTC - event for tester-1: {resource_claim } FailedResourceClaimCreation: PodResourceClaim my-inline-claim: resource claim template "tester-1": resourceclaimtemplate.resource.k8s.io "tester-1" not found Mar 14 14:02:34.160: INFO: At 2023-03-14 14:01:27 +0000 UTC - event for tester-1: {default-scheduler } Scheduled: Successfully assigned dra-8543/tester-1 to kind-worker2 Mar 14 14:02:34.160: INFO: At 2023-03-14 14:01:28 +0000 UTC - event for tester-1: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 14 14:02:34.160: INFO: At 2023-03-14 14:01:28 +0000 UTC - event for tester-1: {kubelet kind-worker2} Created: Created container with-resource-init Mar 14 14:02:34.160: INFO: At 2023-03-14 14:01:29 +0000 UTC - event for tester-1: {kubelet kind-worker2} Started: Started container with-resource-init Mar 14 14:02:34.160: INFO: At 2023-03-14 14:01:29 +0000 UTC - event for tester-1: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 14 14:02:34.160: INFO: At 2023-03-14 14:01:29 +0000 UTC - event for tester-1: {kubelet kind-worker2} Created: Created container with-resource Mar 14 14:02:34.160: INFO: At 2023-03-14 14:01:29 +0000 UTC - event for tester-1: {kubelet kind-worker2} Started: Started container with-resource Mar 14 14:02:34.160: INFO: At 2023-03-14 14:01:32 +0000 UTC - event for tester-1: {kubelet kind-worker2} Killing: Stopping container with-resource Mar 14 14:02:34.160: INFO: At 2023-03-14 14:01:33 +0000 UTC - event for tester-1-my-inline-claim: {resource driver dra-8543.k8s.io } Failed: remove allocation: ResourceClaim.resource.k8s.io "tester-1-my-inline-claim" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete Mar 14 14:02:34.170: INFO: POD NODE PHASE GRACE CONDITIONS Mar 14 14:02:34.170: INFO: 
dra-test-driver-dx7tb kind-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:20 +0000 UTC }] Mar 14 14:02:34.170: INFO: dra-test-driver-w6qnj kind-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:20 +0000 UTC }] Mar 14 14:02:34.170: INFO: Mar 14 14:02:34.232: INFO: Logging node info for node kind-control-plane Mar 14 14:02:34.236: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7b0c8f1f-7d2e-4b5f-ab52-0e2399b9f764 438 0 2023-03-14 13:57:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-14 13:57:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:57:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:58:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e8e6b089f1f44ab8ef4a2bc879ddd73,SystemUUID:ee43f17b-1489-4ea4-bec5-b7916f4f1fb0,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 
registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:02:34.237: INFO: Logging kubelet events for node kind-control-plane Mar 14 14:02:34.245: INFO: Logging pods the kubelet thinks is on node kind-control-plane Mar 14 14:02:34.258: INFO: kube-scheduler-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:34.258: INFO: Container kube-scheduler ready: true, restart count 0 Mar 14 14:02:34.258: INFO: etcd-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:34.258: INFO: Container etcd ready: true, restart count 0 Mar 14 14:02:34.258: INFO: kube-apiserver-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:34.258: INFO: Container kube-apiserver ready: true, restart count 0 Mar 14 14:02:34.258: INFO: kindnet-nx87k started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:34.258: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:02:34.258: INFO: coredns-ffc665895-vmqts started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:34.258: INFO: Container coredns ready: true, restart count 
0 Mar 14 14:02:34.258: INFO: kube-controller-manager-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:34.258: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 14 14:02:34.258: INFO: coredns-ffc665895-mnldc started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:34.258: INFO: Container coredns ready: true, restart count 0 Mar 14 14:02:34.258: INFO: local-path-provisioner-687869657c-v9k2k started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:34.258: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 14 14:02:34.258: INFO: kube-proxy-fm2jh started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:34.258: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:02:34.328: INFO: Latency metrics for node kind-control-plane Mar 14 14:02:34.328: INFO: Logging node info for node kind-worker Mar 14 14:02:34.333: INFO: Node Info: &Node{ObjectMeta:{kind-worker 9cca062e-b3b4-4ef2-9c10-412063b4ece4 1368 0 2023-03-14 13:58:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } 
{kube-controller-manager Update v1 2023-03-14 13:58:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2023-03-14 13:59:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 
+0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5a3b3841831c42fc96e5cb187f537f04,SystemUUID:ed67c939-37e3-47de-ab06-0144304a5aa1,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 
registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:02:34.333: INFO: Logging kubelet events for node kind-worker Mar 14 14:02:34.342: INFO: Logging pods the kubelet thinks is on node kind-worker Mar 14 14:02:34.353: INFO: kindnet-fzdn9 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:34.353: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:02:34.353: INFO: kube-proxy-l4q98 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:34.353: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:02:34.353: INFO: dra-test-driver-zg2wf started at 2023-03-14 14:01:29 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:34.353: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:34.353: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:34.353: INFO: dra-test-driver-xrvr8 started at 2023-03-14 14:01:15 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:34.353: INFO: Container plugin ready: 
true, restart count 0 Mar 14 14:02:34.353: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:34.353: INFO: dra-test-driver-dx7tb started at 2023-03-14 14:01:20 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:34.353: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:34.353: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:34.353: INFO: dra-test-driver-9bgm4 started at 2023-03-14 14:01:23 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:34.353: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:34.353: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:34.353: INFO: dra-test-driver-wsdzm started at 2023-03-14 14:01:20 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:34.353: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:34.353: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:34.455: INFO: Latency metrics for node kind-worker Mar 14 14:02:34.455: INFO: Logging node info for node kind-worker2 Mar 14 14:02:34.461: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 49a194e2-5e70-437e-aa3c-3a490ff23c54 1358 0 2023-03-14 13:58:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:58:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:59:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 
13:58:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:48810b9a669b47cea51d5fa0f821cf84,SystemUUID:603f9452-86ad-460a-83be-e3f10d4a362c,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 
registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:02:34.461: INFO: Logging kubelet events for node kind-worker2 Mar 14 14:02:34.470: INFO: Logging pods the kubelet thinks is on node kind-worker2 Mar 14 14:02:34.479: INFO: dra-test-driver-w6qnj started at 2023-03-14 14:01:20 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:34.479: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:34.479: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:34.479: INFO: kindnet-5qdz7 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:34.479: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:02:34.479: INFO: dra-test-driver-h6qql started at 2023-03-14 14:01:23 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:34.479: INFO: 
Container plugin ready: true, restart count 0 Mar 14 14:02:34.479: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:34.479: INFO: dra-test-driver-lgdqg started at 2023-03-14 14:01:20 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:34.479: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:34.479: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:34.479: INFO: kube-proxy-vnlx8 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:34.479: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:02:34.479: INFO: dra-test-driver-9f5vg started at 2023-03-14 14:01:29 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:34.479: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:34.479: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:34.583: INFO: Latency metrics for node kind-worker2 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:02:34.583 (427ms) < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:02:34.583 (427ms) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:02:34.583 STEP: Destroying namespace "dra-8543" for this suite. - test/e2e/framework/framework.go:351 @ 03/14/23 14:02:34.583 < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:02:34.595 (12ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:02:34.595 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:02:34.595 (0s)
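The repeated driver failure above ("status.allocation: Forbidden: can only remove while marking a deallocation as complete") comes from ResourceClaim status validation: the apiserver only permits clearing `status.allocation` in an update that simultaneously marks a requested deallocation as complete. The test driver's controller tries to drop the allocation without that precondition, so cleanup retries until the 60s timeout expires and the claim is left behind with its finalizer. A minimal sketch of the rule, using simplified, hypothetical types (not the actual apiserver validation code):

```go
package main

import "fmt"

// ResourceClaimStatus models only the fields relevant to the
// validation error in the log. Simplified, hypothetical shape.
type ResourceClaimStatus struct {
	Allocation            *string // nil means "not allocated"
	DeallocationRequested bool
}

// validateStatusUpdate sketches the rule: status.allocation may only
// be removed while a previously requested deallocation is being
// marked as complete.
func validateStatusUpdate(old, updated ResourceClaimStatus) error {
	removingAllocation := old.Allocation != nil && updated.Allocation == nil
	if removingAllocation && !old.DeallocationRequested {
		return fmt.Errorf("status.allocation: Forbidden: can only remove while marking a deallocation as complete")
	}
	return nil
}

func main() {
	alloc := "allocated"

	// The failing path from the log: the driver clears the allocation
	// even though deallocation was never requested.
	err := validateStatusUpdate(
		ResourceClaimStatus{Allocation: &alloc, DeallocationRequested: false},
		ResourceClaimStatus{Allocation: nil, DeallocationRequested: false},
	)
	fmt.Println("rejected:", err != nil)

	// The allowed path: deallocation was requested first, and the
	// driver clears both fields in the same status update.
	err = validateStatusUpdate(
		ResourceClaimStatus{Allocation: &alloc, DeallocationRequested: true},
		ResourceClaimStatus{Allocation: nil, DeallocationRequested: false},
	)
	fmt.Println("accepted:", err == nil)
}
```

Under this reading, the fix on the driver/controller side is to either set `deallocationRequested` before removing the allocation, or (as the v1alpha2 API change in this PR implies) keep the controller's status-update sequence in step with the new validation.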
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sDRA\s\[Feature\:DynamicResourceAllocation\]\scluster\swith\simmediate\sallocation\ssupports\sinline\sclaim\sreferenced\sby\smultiple\scontainers$'
[FAILED] Timed out after 60.000s.
claims in the namespaces
Expected
    <[]v1alpha2.ResourceClaim | len:1, cap:1>:
    - metadata:
        creationTimestamp: "2023-03-14T14:00:11Z"
        deletionGracePeriodSeconds: 0
        deletionTimestamp: "2023-03-14T14:00:18Z"
        finalizers:
        - dra-3050.k8s.io/deletion-protection
        managedFields:
        - apiVersion: resource.k8s.io/v1alpha2
          fieldsType: FieldsV1
          fieldsV1:
            f:metadata:
              f:finalizers:
                .: {}
                v:"dra-3050.k8s.io/deletion-protection": {}
          manager: e2e.test
          operation: Update
          time: "2023-03-14T14:00:11Z"
        - apiVersion: resource.k8s.io/v1alpha2
          fieldsType: FieldsV1
          fieldsV1:
            f:status:
              f:allocation:
                .: {}
                f:availableOnNodes: {}
                f:context: {}
                f:shareable: {}
              f:driverName: {}
          manager: e2e.test
          operation: Update
          subresource: status
          time: "2023-03-14T14:00:11Z"
        - apiVersion: resource.k8s.io/v1alpha2
          fieldsType: FieldsV1
          fieldsV1:
            f:metadata:
              f:ownerReferences:
                .: {}
                k:{"uid":"0218ecc1-cf21-42c2-853d-492c479358f1"}: {}
            f:spec:
              f:allocationMode: {}
              f:parametersRef:
                .: {}
                f:kind: {}
                f:name: {}
              f:resourceClassName: {}
          manager: kube-controller-manager
          operation: Update
          time: "2023-03-14T14:00:11Z"
        name: tester-1-my-inline-claim
        namespace: dra-3050
        ownerReferences:
        - apiVersion: v1
          blockOwnerDeletion: true
          controller: true
          kind: Pod
          name: tester-1
          uid: 0218ecc1-cf21-42c2-853d-492c479358f1
        resourceVersion: "2312"
        uid: 5fc0eee4-c976-4049-a435-84722429f1ee
      spec:
        allocationMode: Immediate
        parametersRef:
          kind: ConfigMap
          name: parameters-1
        resourceClassName: dra-3050-class
      status:
        allocation:
          availableOnNodes:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - kind-worker
                - kind-worker2
          context:
          - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}'
          shareable: true
        driverName: dra-3050.k8s.io
to be empty
In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:01:19.474
from junit_01.xml
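In the driver log below, the test driver writes the CDI spec into the plugin container by base64-decoding into a `.json.tmp` file and then `mv`-ing it into place, so the file appears atomically. A minimal standalone sketch of that write pattern (the JSON payload and file naming are copied from the log; the temp directory is a stand-in for the node's `/cdi` directory, and this is illustrative, not the driver's actual code):

```shell
# Atomic CDI-spec write, as seen in the exec commands in the log:
# decode the payload into a temporary file, then rename it into place so a
# reader never observes a half-written JSON file.
set -eu

cdi_dir=$(mktemp -d)   # stand-in for /cdi on the node
spec='{"cdiVersion":"0.3.0","kind":"dra-3050.k8s.io/test","devices":[{"name":"claim-5fc0eee4-c976-4049-a435-84722429f1ee","containerEdits":{"env":["user_a=b"]}}]}'
target="$cdi_dir/dra-3050.k8s.io-5fc0eee4-c976-4049-a435-84722429f1ee.json"

# Same two steps the driver runs via pod exec (base64 round-trip included
# because the e2e test ships the payload base64-encoded over exec):
printf '%s' "$spec" | base64 | base64 -d > "$target.tmp"
mv "$target.tmp" "$target"

cat "$target"   # prints the CDI spec written above
```

The rename is what makes the pattern safe: `mv` within one filesystem is atomic, so the kubelet or runtime watching the CDI directory sees either no file or the complete spec, never a truncated one.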
> Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 14:00:07.313
STEP: Creating a kubernetes client - test/e2e/framework/framework.go:211 @ 03/14/23 14:00:07.313
Mar 14 14:00:07.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dra - test/e2e/framework/framework.go:250 @ 03/14/23 14:00:07.314
STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 03/14/23 14:00:07.33
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:262 @ 03/14/23 14:00:07.334
< Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 14:00:07.338 (24ms)
> Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 14:00:07.338
< Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 14:00:07.338 (0s)
> Enter [BeforeEach] cluster - test/e2e/dra/deploy.go:62 @ 03/14/23 14:00:07.338
STEP: selecting nodes - test/e2e/dra/deploy.go:63 @ 03/14/23 14:00:07.338
Mar 14 14:00:07.342: INFO: testing on nodes [kind-worker kind-worker2]
< Exit [BeforeEach] cluster - test/e2e/dra/deploy.go:62 @ 03/14/23 14:00:07.342 (4ms)
> Enter [BeforeEach] cluster - test/e2e/dra/deploy.go:95 @ 03/14/23 14:00:07.342
STEP: deploying driver on nodes [kind-worker kind-worker2] - test/e2e/dra/deploy.go:130 @ 03/14/23 14:00:07.342
I0314 14:00:07.343030 66283 controller.go:295] "resource controller: Starting" driver="dra-3050.k8s.io"
Mar 14 14:00:07.343: INFO: creating *v1.ReplicaSet: dra-3050/dra-test-driver
I0314 14:00:09.369288 66283 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker" pod="dra-3050/dra-test-driver-t74z8"
I0314 14:00:09.369363 66283 nonblockinggrpcserver.go:107] "kubelet plugin/registrar: GRPC server started" node="kind-worker" pod="dra-3050/dra-test-driver-t74z8"
I0314 14:00:09.371104 66283 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker2" pod="dra-3050/dra-test-driver-v6g2p"
I0314 14:00:09.371140 66283 nonblockinggrpcserver.go:107] "kubelet plugin/registrar: GRPC server started" node="kind-worker2" pod="dra-3050/dra-test-driver-v6g2p"
STEP: wait for plugin registration - test/e2e/dra/deploy.go:242 @ 03/14/23 14:00:09.371
I0314 14:00:09.573930 66283 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-3050/dra-test-driver-t74z8" requestID=1 request="&InfoRequest{}"
I0314 14:00:09.573986 66283 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-3050/dra-test-driver-v6g2p" requestID=1 request="&InfoRequest{}"
I0314 14:00:09.573977 66283 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-3050/dra-test-driver-t74z8" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-3050.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-3050.k8s.io.sock,SupportedVersions:[1.0.0],}"
I0314 14:00:09.574032 66283 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-3050/dra-test-driver-v6g2p" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-3050.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-3050.k8s.io.sock,SupportedVersions:[1.0.0],}"
I0314 14:00:09.576026 66283 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-3050/dra-test-driver-v6g2p" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}"
I0314 14:00:09.576058 66283 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-3050/dra-test-driver-v6g2p" requestID=2 response="&RegistrationStatusResponse{}"
I0314 14:00:09.581675 66283 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-3050/dra-test-driver-t74z8" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}"
I0314 14:00:09.581705 66283 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-3050/dra-test-driver-t74z8" requestID=2 response="&RegistrationStatusResponse{}"
< Exit [BeforeEach] cluster - test/e2e/dra/deploy.go:95 @ 03/14/23 14:00:11.372 (4.03s)
> Enter [BeforeEach] cluster - test/e2e/dra/dra.go:752 @ 03/14/23 14:00:11.372
STEP: creating *v1alpha2.ResourceClass dra-3050-class - test/e2e/dra/dra.go:706 @ 03/14/23 14:00:11.372
END STEP: creating *v1alpha2.ResourceClass dra-3050-class - test/e2e/dra/dra.go:706 @ 03/14/23 14:00:11.377 (5ms)
< Exit [BeforeEach] cluster - test/e2e/dra/dra.go:752 @ 03/14/23 14:00:11.377 (5ms)
> Enter [It] supports inline claim referenced by multiple containers - test/e2e/dra/dra.go:180 @ 03/14/23 14:00:11.377
STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:00:11.377
END STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:00:11.382 (4ms)
STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:00:11.382
END STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:00:11.387 (5ms)
STEP: creating *v1alpha2.ResourceClaimTemplate tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:00:11.387
END STEP: creating *v1alpha2.ResourceClaimTemplate tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:00:11.395 (9ms)
I0314 14:00:13.257062 66283 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker" pod="dra-3050/dra-test-driver-t74z8" requestID=1
request="&NodePrepareResourceRequest{Namespace:dra-3050,ClaimUid:5fc0eee4-c976-4049-a435-84722429f1ee,ClaimName:tester-1-my-inline-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}"
STEP: creating CDI file /cdi/dra-3050.k8s.io-5fc0eee4-c976-4049-a435-84722429f1ee.json on node kind-worker: {"cdiVersion":"0.3.0","kind":"dra-3050.k8s.io/test","devices":[{"name":"claim-5fc0eee4-c976-4049-a435-84722429f1ee","containerEdits":{"env":["user_a=b"]}}]} - test/e2e/dra/deploy.go:217 @ 03/14/23 14:00:13.257
Mar 14 14:00:13.257: INFO: >>> kubeConfig: /root/.kube/config
Mar 14 14:00:13.257: INFO: ExecWithOptions: Clientset creation
Mar 14 14:00:13.258: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-3050/pods/dra-test-driver-t74z8/exec?command=sh&command=-c&command=base64+-d+%3E%27%2Fcdi%2Fdra-3050.k8s.io-5fc0eee4-c976-4049-a435-84722429f1ee.json.tmp%27+%3C%3CEOF%0AeyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTMwNTAuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tNWZjMGVlZTQtYzk3Ni00MDQ5LWE0MzUtODQ3MjI0MjlmMWVlIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19%0AEOF&container=plugin&container=plugin&stderr=true&stdout=true)
I0314 14:00:13.356023 66283 io.go:119] "Command completed" command=<
	[sh -c base64 -d >'/cdi/dra-3050.k8s.io-5fc0eee4-c976-4049-a435-84722429f1ee.json.tmp' <<EOF
	eyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTMwNTAuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tNWZjMGVlZTQtYzk3Ni00MDQ5LWE0MzUtODQ3MjI0MjlmMWVlIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19
	EOF]
 > stdout="" stderr="" err=<nil>
Mar 14 14:00:13.356: INFO: >>> kubeConfig: /root/.kube/config
Mar 14 14:00:13.357: INFO: ExecWithOptions: Clientset creation
Mar 14 14:00:13.357: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-3050/pods/dra-test-driver-t74z8/exec?command=mv&command=%2Fcdi%2Fdra-3050.k8s.io-5fc0eee4-c976-4049-a435-84722429f1ee.json.tmp&command=%2Fcdi%2Fdra-3050.k8s.io-5fc0eee4-c976-4049-a435-84722429f1ee.json&container=plugin&container=plugin&stderr=true&stdout=true)
I0314 14:00:13.448765 66283 io.go:119] "Command completed" command=[mv /cdi/dra-3050.k8s.io-5fc0eee4-c976-4049-a435-84722429f1ee.json.tmp /cdi/dra-3050.k8s.io-5fc0eee4-c976-4049-a435-84722429f1ee.json] stdout="" stderr="" err=<nil>
I0314 14:00:13.448852 66283 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker" pod="dra-3050/dra-test-driver-t74z8" requestID=1 response="&NodePrepareResourceResponse{CdiDevices:[dra-3050.k8s.io/test=claim-5fc0eee4-c976-4049-a435-84722429f1ee],}"
< Exit [It] supports inline claim referenced by multiple containers - test/e2e/dra/dra.go:180 @ 03/14/23 14:00:15.432 (4.055s)
> Enter [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 14:00:15.432
Mar 14 14:00:15.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
< Exit [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 14:00:15.436 (4ms)
> Enter [DeferCleanup (Each)] cluster - test/e2e/dra/dra.go:762 @ 03/14/23 14:00:15.436
STEP: delete pods and claims - test/e2e/dra/dra.go:773 @ 03/14/23 14:00:15.443
STEP: deleting *v1.Pod dra-3050/tester-1 - test/e2e/dra/dra.go:780 @ 03/14/23 14:00:15.447
I0314 14:00:17.920672 66283 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker" pod="dra-3050/dra-test-driver-t74z8" requestID=2 request="&NodeUnprepareResourceRequest{Namespace:dra-3050,ClaimUid:5fc0eee4-c976-4049-a435-84722429f1ee,ClaimName:tester-1-my-inline-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}"
STEP: deleting CDI file
/cdi/dra-3050.k8s.io-5fc0eee4-c976-4049-a435-84722429f1ee.json on node kind-worker - test/e2e/dra/deploy.go:221 @ 03/14/23 14:00:17.92
Mar 14 14:00:17.920: INFO: >>> kubeConfig: /root/.kube/config
Mar 14 14:00:17.921: INFO: ExecWithOptions: Clientset creation
Mar 14 14:00:17.921: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-3050/pods/dra-test-driver-t74z8/exec?command=rm&command=-rf&command=%2Fcdi%2Fdra-3050.k8s.io-5fc0eee4-c976-4049-a435-84722429f1ee.json&container=plugin&container=plugin&stderr=true&stdout=true)
I0314 14:00:18.004070 66283 io.go:119] "Command completed" command=[rm -rf /cdi/dra-3050.k8s.io-5fc0eee4-c976-4049-a435-84722429f1ee.json] stdout="" stderr="" err=<nil>
I0314 14:00:18.004110 66283 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker" pod="dra-3050/dra-test-driver-t74z8" requestID=2 response="&NodeUnprepareResourceResponse{}"
E0314 14:00:18.736124 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-3050/tester-1-my-inline-claim"
E0314 14:00:18.746684 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-3050/tester-1-my-inline-claim"
E0314 14:00:18.761750 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-3050/tester-1-my-inline-claim"
E0314 14:00:18.786984 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-3050/tester-1-my-inline-claim"
E0314 14:00:18.831952 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-3050/tester-1-my-inline-claim"
E0314 14:00:18.918811 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-3050/tester-1-my-inline-claim"
E0314 14:00:19.086757 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-3050/tester-1-my-inline-claim"
E0314 14:00:19.412961 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-3050/tester-1-my-inline-claim"
STEP: waiting for resources on kind-worker to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 14:00:19.472
STEP: waiting for resources on kind-worker2 to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 14:00:19.472
STEP: waiting for claims to be deallocated and deleted - test/e2e/dra/dra.go:808 @ 03/14/23 14:00:19.472
E0314 14:00:20.058173 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-3050/tester-1-my-inline-claim"
E0314 14:00:21.344483 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-3050/tester-1-my-inline-claim"
E0314 14:00:23.910157 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-3050/tester-1-my-inline-claim"
E0314 14:00:29.035206 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-3050/tester-1-my-inline-claim"
E0314 14:00:39.280635 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-3050/tester-1-my-inline-claim"
E0314 14:00:59.766457 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-3050/tester-1-my-inline-claim"
[FAILED] Timed out after 60.000s.
claims in the namespaces
Expected
    <[]v1alpha2.ResourceClaim | len:1, cap:1>:
    - metadata:
        creationTimestamp: "2023-03-14T14:00:11Z"
        deletionGracePeriodSeconds: 0
        deletionTimestamp: "2023-03-14T14:00:18Z"
        finalizers:
        - dra-3050.k8s.io/deletion-protection
        managedFields:
        - apiVersion: resource.k8s.io/v1alpha2
          fieldsType: FieldsV1
          fieldsV1:
            f:metadata:
              f:finalizers:
                .: {}
                v:"dra-3050.k8s.io/deletion-protection": {}
          manager: e2e.test
          operation: Update
          time: "2023-03-14T14:00:11Z"
        - apiVersion: resource.k8s.io/v1alpha2
          fieldsType: FieldsV1
          fieldsV1:
            f:status:
              f:allocation:
                .: {}
                f:availableOnNodes: {}
                f:context: {}
                f:shareable: {}
              f:driverName: {}
          manager: e2e.test
          operation: Update
          subresource: status
          time: "2023-03-14T14:00:11Z"
        - apiVersion: resource.k8s.io/v1alpha2
          fieldsType: FieldsV1
          fieldsV1:
            f:metadata:
              f:ownerReferences:
                .: {}
                k:{"uid":"0218ecc1-cf21-42c2-853d-492c479358f1"}: {}
            f:spec:
              f:allocationMode: {}
              f:parametersRef:
                .: {}
                f:kind: {}
                f:name: {}
              f:resourceClassName: {}
          manager: kube-controller-manager
          operation: Update
          time: "2023-03-14T14:00:11Z"
        name: tester-1-my-inline-claim
        namespace: dra-3050
        ownerReferences:
        - apiVersion: v1
          blockOwnerDeletion: true
          controller: true
          kind: Pod
          name: tester-1
          uid: 0218ecc1-cf21-42c2-853d-492c479358f1
        resourceVersion: "2312"
        uid: 5fc0eee4-c976-4049-a435-84722429f1ee
      spec:
        allocationMode: Immediate
        parametersRef:
          kind: ConfigMap
          name: parameters-1
        resourceClassName: dra-3050-class
      status:
        allocation:
          availableOnNodes:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - kind-worker
                - kind-worker2
          context:
          - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}'
          shareable: true
        driverName: dra-3050.k8s.io
to be empty
In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:01:19.474
< Exit [DeferCleanup (Each)] cluster - test/e2e/dra/dra.go:762 @ 03/14/23 14:01:19.474 (1m4.038s)
> Enter [DeferCleanup (Each)] cluster - test/e2e/dra/deploy.go:103 @ 03/14/23 14:01:19.474
I0314 14:01:19.474720 66283 controller.go:310] "resource controller: Shutting down" driver="dra-3050.k8s.io"
E0314 14:01:19.475742 66283 nonblockinggrpcserver.go:101] "kubelet plugin/registrar: GRPC server failed" err="listening was stopped" node="kind-worker2" pod="dra-3050/dra-test-driver-v6g2p"
E0314 14:01:19.475872 66283 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker2" pod="dra-3050/dra-test-driver-v6g2p"
E0314 14:01:19.476099 66283 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker" pod="dra-3050/dra-test-driver-t74z8"
< Exit [DeferCleanup (Each)] cluster - test/e2e/dra/deploy.go:103 @ 03/14/23 14:01:19.476 (2ms)
> Enter [DeferCleanup (Each)] cluster - deleting *v1.ReplicaSet: dra-3050/dra-test-driver | create.go:156 @ 03/14/23 14:01:19.476
< Exit [DeferCleanup (Each)] cluster - deleting *v1.ReplicaSet: dra-3050/dra-test-driver | create.go:156 @ 03/14/23 14:01:19.502 (26ms)
> Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 14:01:19.502
< Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 14:01:19.503 (0s)
> Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:01:19.503
STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:01:19.503
STEP: Collecting events from namespace "dra-3050". - test/e2e/framework/debug/dump.go:42 @ 03/14/23 14:01:19.503
STEP: Found 32 events.
- test/e2e/framework/debug/dump.go:46 @ 03/14/23 14:01:19.507
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:07 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-t74z8
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:07 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-v6g2p
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:07 +0000 UTC - event for dra-test-driver-t74z8: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:07 +0000 UTC - event for dra-test-driver-t74z8: {kubelet kind-worker} Created: Created container registrar
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:07 +0000 UTC - event for dra-test-driver-t74z8: {default-scheduler } Scheduled: Successfully assigned dra-3050/dra-test-driver-t74z8 to kind-worker
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:07 +0000 UTC - event for dra-test-driver-v6g2p: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:07 +0000 UTC - event for dra-test-driver-v6g2p: {kubelet kind-worker2} Created: Created container registrar
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:07 +0000 UTC - event for dra-test-driver-v6g2p: {default-scheduler } Scheduled: Successfully assigned dra-3050/dra-test-driver-v6g2p to kind-worker2
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:08 +0000 UTC - event for dra-test-driver-t74z8: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:08 +0000 UTC - event for dra-test-driver-t74z8: {kubelet kind-worker} Started: Started container plugin
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:08 +0000 UTC - event for dra-test-driver-t74z8: {kubelet kind-worker} Created: Created container plugin
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:08 +0000 UTC - event for dra-test-driver-t74z8: {kubelet kind-worker} Started: Started container registrar
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:08 +0000 UTC - event for dra-test-driver-v6g2p: {kubelet kind-worker2} Started: Started container registrar
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:08 +0000 UTC - event for dra-test-driver-v6g2p: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:08 +0000 UTC - event for dra-test-driver-v6g2p: {kubelet kind-worker2} Created: Created container plugin
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:08 +0000 UTC - event for dra-test-driver-v6g2p: {kubelet kind-worker2} Started: Started container plugin
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:11 +0000 UTC - event for tester-1: {resource_claim } FailedResourceClaimCreation: PodResourceClaim my-inline-claim: resource claim template "tester-1": resourceclaimtemplate.resource.k8s.io "tester-1" not found
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:11 +0000 UTC - event for tester-1: {default-scheduler } FailedScheduling: 0/3 nodes are available: waiting for dynamic resource controller to create the resourceclaim "tester-1-my-inline-claim". no new claims to deallocate, preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod..
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:12 +0000 UTC - event for tester-1: {default-scheduler } Scheduled: Successfully assigned dra-3050/tester-1 to kind-worker
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:13 +0000 UTC - event for tester-1: {kubelet kind-worker} Created: Created container with-resource-1
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:13 +0000 UTC - event for tester-1: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:13 +0000 UTC - event for tester-1: {kubelet kind-worker} Created: Created container with-resource
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:13 +0000 UTC - event for tester-1: {kubelet kind-worker} Started: Started container with-resource
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:13 +0000 UTC - event for tester-1: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:14 +0000 UTC - event for tester-1: {kubelet kind-worker} Started: Started container with-resource-1
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:14 +0000 UTC - event for tester-1: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:14 +0000 UTC - event for tester-1: {kubelet kind-worker} Created: Created container with-resource-1-2
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:14 +0000 UTC - event for tester-1: {kubelet kind-worker} Started: Started container with-resource-1-2
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:16 +0000 UTC - event for tester-1: {kubelet kind-worker} Killing: Stopping container with-resource
Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:16 +0000 UTC - event for tester-1: {kubelet kind-worker} Killing: Stopping container with-resource-1-2
Mar 14 14:01:19.507: INFO: At 2023-03-14
14:00:16 +0000 UTC - event for tester-1: {kubelet kind-worker} Killing: Stopping container with-resource-1 Mar 14 14:01:19.507: INFO: At 2023-03-14 14:00:18 +0000 UTC - event for tester-1-my-inline-claim: {resource driver dra-3050.k8s.io } Failed: remove allocation: ResourceClaim.resource.k8s.io "tester-1-my-inline-claim" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete Mar 14 14:01:19.514: INFO: POD NODE PHASE GRACE CONDITIONS Mar 14 14:01:19.514: INFO: dra-test-driver-t74z8 kind-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:00:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:00:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:00:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:00:07 +0000 UTC }] Mar 14 14:01:19.514: INFO: dra-test-driver-v6g2p kind-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:00:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:00:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:00:09 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:00:07 +0000 UTC }] Mar 14 14:01:19.514: INFO: Mar 14 14:01:19.612: INFO: Logging node info for node kind-control-plane Mar 14 14:01:19.631: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7b0c8f1f-7d2e-4b5f-ab52-0e2399b9f764 438 0 2023-03-14 13:57:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-14 13:57:50 +0000 
UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:57:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 
0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:58:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e8e6b089f1f44ab8ef4a2bc879ddd73,SystemUUID:ee43f17b-1489-4ea4-bec5-b7916f4f1fb0,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 
registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:01:19.631: INFO: Logging kubelet events for node kind-control-plane Mar 14 14:01:19.669: INFO: Logging pods the kubelet thinks is on node kind-control-plane Mar 14 14:01:19.697: INFO: kube-proxy-fm2jh started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:19.697: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:01:19.697: INFO: coredns-ffc665895-mnldc started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:19.697: INFO: Container coredns ready: true, restart count 0 Mar 14 14:01:19.697: INFO: local-path-provisioner-687869657c-v9k2k started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:19.697: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 14 14:01:19.697: INFO: kube-apiserver-kind-control-plane started at 2023-03-14 13:57:54 
+0000 UTC (0+1 container statuses recorded) Mar 14 14:01:19.697: INFO: Container kube-apiserver ready: true, restart count 0 Mar 14 14:01:19.697: INFO: kindnet-nx87k started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:19.697: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:01:19.697: INFO: coredns-ffc665895-vmqts started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:19.697: INFO: Container coredns ready: true, restart count 0 Mar 14 14:01:19.697: INFO: kube-controller-manager-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:19.697: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 14 14:01:19.697: INFO: kube-scheduler-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:19.697: INFO: Container kube-scheduler ready: true, restart count 0 Mar 14 14:01:19.697: INFO: etcd-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:19.697: INFO: Container etcd ready: true, restart count 0 Mar 14 14:01:19.800: INFO: Latency metrics for node kind-control-plane Mar 14 14:01:19.800: INFO: Logging node info for node kind-worker Mar 14 14:01:19.813: INFO: Node Info: &Node{ObjectMeta:{kind-worker 9cca062e-b3b4-4ef2-9c10-412063b4ece4 1368 0 2023-03-14 13:58:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-14 13:58:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2023-03-14 13:59:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5a3b3841831c42fc96e5cb187f537f04,SystemUUID:ed67c939-37e3-47de-ab06-0144304a5aa1,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 
registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:01:19.813: INFO: Logging kubelet events for node kind-worker Mar 14 14:01:19.818: INFO: Logging pods the kubelet thinks is on node kind-worker Mar 14 14:01:19.828: INFO: tester-1 started at 2023-03-14 14:01:19 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:19.828: INFO: Container with-resource ready: false, restart count 0 Mar 14 14:01:19.828: INFO: dra-test-driver-8jmwc started at 2023-03-14 14:00:01 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:19.828: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:19.828: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:19.828: INFO: kindnet-fzdn9 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:19.828: INFO: Container 
kindnet-cni ready: true, restart count 0 Mar 14 14:01:19.828: INFO: kube-proxy-l4q98 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:19.828: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:01:19.828: INFO: dra-test-driver-6zxqg started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:19.828: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:19.828: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:19.828: INFO: dra-test-driver-zrgsx started at 2023-03-14 14:01:15 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:19.828: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:19.828: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:19.828: INFO: dra-test-driver-twq8l started at 2023-03-14 14:00:03 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:19.828: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:19.828: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:19.828: INFO: dra-test-driver-bvfg8 started at 2023-03-14 14:00:02 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:19.828: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:19.828: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:19.828: INFO: dra-test-driver-b7dnq started at 2023-03-14 14:01:09 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:19.828: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:19.828: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:19.828: INFO: dra-test-driver-xrvr8 started at 2023-03-14 14:01:15 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:19.828: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:19.828: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:19.828: INFO: dra-test-driver-t74z8 started at 2023-03-14 14:00:07 +0000 UTC (0+2 container statuses recorded) 
Mar 14 14:01:19.828: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:19.828: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:19.941: INFO: Latency metrics for node kind-worker Mar 14 14:01:19.941: INFO: Logging node info for node kind-worker2 Mar 14 14:01:19.949: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 49a194e2-5e70-437e-aa3c-3a490ff23c54 1358 0 2023-03-14 13:58:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:58:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:59:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:48810b9a669b47cea51d5fa0f821cf84,SystemUUID:603f9452-86ad-460a-83be-e3f10d4a362c,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 
registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:01:19.949: INFO: Logging kubelet events for node kind-worker2 Mar 14 14:01:19.956: INFO: Logging pods the kubelet thinks is on node kind-worker2 Mar 14 14:01:19.970: INFO: tester-3 started at 2023-03-14 14:01:19 +0000 UTC (0+3 container statuses recorded) Mar 14 14:01:19.970: INFO: Container with-resource ready: false, restart count 0 Mar 14 14:01:19.970: INFO: Container with-resource-1 ready: false, restart count 0 Mar 14 14:01:19.970: INFO: Container with-resource-1-2 ready: false, restart count 0 Mar 14 14:01:19.970: INFO: dra-test-driver-ss4k7 started at 2023-03-14 14:00:01 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:19.970: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:19.970: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:19.970: INFO: kindnet-5qdz7 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:19.970: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 
14:01:19.970: INFO: dra-test-driver-v6g2p started at 2023-03-14 14:00:07 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:19.970: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:19.970: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:19.970: INFO: dra-test-driver-w277j started at 2023-03-14 14:00:02 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:19.970: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:19.970: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:19.970: INFO: kube-proxy-vnlx8 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:19.970: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:01:19.970: INFO: dra-test-driver-c9jh8 started at 2023-03-14 14:00:03 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:19.970: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:19.970: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:19.970: INFO: dra-test-driver-nvf7d started at 2023-03-14 14:01:15 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:19.970: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:19.970: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:19.970: INFO: dra-test-driver-vvn4m started at 2023-03-14 14:01:09 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:19.970: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:19.970: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:19.970: INFO: tester-1 started at 2023-03-14 14:01:17 +0000 UTC (1+1 container statuses recorded) Mar 14 14:01:19.970: INFO: Init container with-resource-init ready: true, restart count 0 Mar 14 14:01:19.970: INFO: Container with-resource ready: false, restart count 0 Mar 14 14:01:20.041: INFO: Latency metrics for node kind-worker2 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 
03/14/23 14:01:20.041 (539ms) < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:01:20.041 (539ms) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:01:20.041 STEP: Destroying namespace "dra-3050" for this suite. - test/e2e/framework/framework.go:351 @ 03/14/23 14:01:20.041 < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:01:20.052 (10ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:01:20.052 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:01:20.052 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sDRA\s\[Feature\:DynamicResourceAllocation\]\scluster\swith\simmediate\sallocation\ssupports\ssimple\spod\sreferencing\sexternal\sresource\sclaim$'
[FAILED] Timed out after 60.000s. claims in the namespaces Expected <[]v1alpha2.ResourceClaim | len:1, cap:1>: - metadata: creationTimestamp: "2023-03-14T14:01:26Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T14:01:34Z" finalizers: - dra-4217.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"dra-4217.k8s.io/deletion-protection": {} f:spec: f:allocationMode: {} f:parametersRef: .: {} f:kind: {} f:name: {} f:resourceClassName: {} manager: e2e.test operation: Update time: "2023-03-14T14:01:26Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: f:allocation: .: {} f:availableOnNodes: {} f:context: {} f:shareable: {} f:driverName: {} manager: e2e.test operation: Update subresource: status time: "2023-03-14T14:01:26Z" name: external-claim namespace: dra-4217 resourceVersion: "3430" uid: 4e0a4335-0bac-4f0e-aafe-b6c0ac442b67 spec: allocationMode: Immediate parametersRef: kind: ConfigMap name: parameters-1 resourceClassName: dra-4217-class status: allocation: availableOnNodes: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kind-worker - kind-worker2 context: - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}' shareable: true driverName: dra-4217.k8s.io to be empty In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:02:34.361from junit_01.xml
> Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 14:01:20.192 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:211 @ 03/14/23 14:01:20.192 Mar 14 14:01:20.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dra - test/e2e/framework/framework.go:250 @ 03/14/23 14:01:20.193 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 03/14/23 14:01:20.206 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:262 @ 03/14/23 14:01:20.211 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 14:01:20.217 (25ms) > Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 14:01:20.217 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 14:01:20.217 (0s) > Enter [BeforeEach] cluster - test/e2e/dra/deploy.go:62 @ 03/14/23 14:01:20.217 STEP: selecting nodes - test/e2e/dra/deploy.go:63 @ 03/14/23 14:01:20.217 Mar 14 14:01:20.221: INFO: testing on nodes [kind-worker kind-worker2] < Exit [BeforeEach] cluster - test/e2e/dra/deploy.go:62 @ 03/14/23 14:01:20.221 (4ms) > Enter [BeforeEach] cluster - test/e2e/dra/deploy.go:95 @ 03/14/23 14:01:20.221 STEP: deploying driver on nodes [kind-worker kind-worker2] - test/e2e/dra/deploy.go:130 @ 03/14/23 14:01:20.222 I0314 14:01:20.222435 66283 controller.go:295] "resource controller: Starting" driver="dra-4217.k8s.io" Mar 14 14:01:20.223: INFO: creating *v1.ReplicaSet: dra-4217/dra-test-driver I0314 14:01:24.267979 66283 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker2" pod="dra-4217/dra-test-driver-lgdqg" I0314 14:01:24.268010 66283 nonblockinggrpcserver.go:107] 
"kubelet plugin/registrar: GRPC server started" node="kind-worker2" pod="dra-4217/dra-test-driver-lgdqg" I0314 14:01:24.269560 66283 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker" pod="dra-4217/dra-test-driver-wsdzm" I0314 14:01:24.269593 66283 nonblockinggrpcserver.go:107] "kubelet plugin/registrar: GRPC server started" node="kind-worker" pod="dra-4217/dra-test-driver-wsdzm" STEP: wait for plugin registration - test/e2e/dra/deploy.go:242 @ 03/14/23 14:01:24.269 I0314 14:01:24.474376 66283 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-4217/dra-test-driver-lgdqg" requestID=1 request="&InfoRequest{}" I0314 14:01:24.474449 66283 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-4217/dra-test-driver-lgdqg" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-4217.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-4217.k8s.io.sock,SupportedVersions:[1.0.0],}" I0314 14:01:24.474786 66283 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-4217/dra-test-driver-wsdzm" requestID=1 request="&InfoRequest{}" I0314 14:01:24.474826 66283 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-4217/dra-test-driver-wsdzm" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-4217.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-4217.k8s.io.sock,SupportedVersions:[1.0.0],}" I0314 14:01:24.478408 66283 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-4217/dra-test-driver-lgdqg" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}" I0314 14:01:24.478439 66283 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-4217/dra-test-driver-lgdqg" requestID=2 
response="&RegistrationStatusResponse{}" I0314 14:01:24.478574 66283 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-4217/dra-test-driver-wsdzm" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}" I0314 14:01:24.478603 66283 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-4217/dra-test-driver-wsdzm" requestID=2 response="&RegistrationStatusResponse{}" < Exit [BeforeEach] cluster - test/e2e/dra/deploy.go:95 @ 03/14/23 14:01:26.27 (6.049s) > Enter [BeforeEach] cluster - test/e2e/dra/dra.go:752 @ 03/14/23 14:01:26.27 STEP: creating *v1alpha2.ResourceClass dra-4217-class - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:26.27 END STEP: creating *v1alpha2.ResourceClass dra-4217-class - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:26.275 (5ms) < Exit [BeforeEach] cluster - test/e2e/dra/dra.go:752 @ 03/14/23 14:01:26.275 (5ms) > Enter [It] supports simple pod referencing external resource claim - test/e2e/dra/dra.go:188 @ 03/14/23 14:01:26.275 STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:26.275 END STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:26.281 (6ms) STEP: creating *v1alpha2.ResourceClaim external-claim - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:26.281 END STEP: creating *v1alpha2.ResourceClaim external-claim - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:26.288 (7ms) STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:26.288 END STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:26.294 (7ms) I0314 14:01:28.320634 66283 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker2" pod="dra-4217/dra-test-driver-lgdqg" requestID=1 
request="&NodePrepareResourceRequest{Namespace:dra-4217,ClaimUid:4e0a4335-0bac-4f0e-aafe-b6c0ac442b67,ClaimName:external-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}"
STEP: creating CDI file /cdi/dra-4217.k8s.io-4e0a4335-0bac-4f0e-aafe-b6c0ac442b67.json on node kind-worker2: {"cdiVersion":"0.3.0","kind":"dra-4217.k8s.io/test","devices":[{"name":"claim-4e0a4335-0bac-4f0e-aafe-b6c0ac442b67","containerEdits":{"env":["user_a=b"]}}]} - test/e2e/dra/deploy.go:217 @ 03/14/23 14:01:28.32
Mar 14 14:01:28.320: INFO: >>> kubeConfig: /root/.kube/config
Mar 14 14:01:28.321: INFO: ExecWithOptions: Clientset creation
Mar 14 14:01:28.322: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-4217/pods/dra-test-driver-lgdqg/exec?command=sh&command=-c&command=base64+-d+%3E%27%2Fcdi%2Fdra-4217.k8s.io-4e0a4335-0bac-4f0e-aafe-b6c0ac442b67.json.tmp%27+%3C%3CEOF%0AeyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTQyMTcuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tNGUwYTQzMzUtMGJhYy00ZjBlLWFhZmUtYjZjMGFjNDQyYjY3IiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19%0AEOF&container=plugin&container=plugin&stderr=true&stdout=true)
I0314 14:01:28.476073 66283 io.go:119] "Command completed" command=< [sh -c base64 -d >'/cdi/dra-4217.k8s.io-4e0a4335-0bac-4f0e-aafe-b6c0ac442b67.json.tmp' <<EOF eyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTQyMTcuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tNGUwYTQzMzUtMGJhYy00ZjBlLWFhZmUtYjZjMGFjNDQyYjY3IiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19 EOF] > stdout="" stderr="" err=<nil>
Mar 14 14:01:28.476: INFO: >>> kubeConfig: /root/.kube/config
Mar 14 14:01:28.477: INFO: ExecWithOptions: Clientset creation
Mar 14 14:01:28.477: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-4217/pods/dra-test-driver-lgdqg/exec?command=mv&command=%2Fcdi%2Fdra-4217.k8s.io-4e0a4335-0bac-4f0e-aafe-b6c0ac442b67.json.tmp&command=%2Fcdi%2Fdra-4217.k8s.io-4e0a4335-0bac-4f0e-aafe-b6c0ac442b67.json&container=plugin&container=plugin&stderr=true&stdout=true)
I0314 14:01:28.630851 66283 io.go:119] "Command completed" command=[mv /cdi/dra-4217.k8s.io-4e0a4335-0bac-4f0e-aafe-b6c0ac442b67.json.tmp /cdi/dra-4217.k8s.io-4e0a4335-0bac-4f0e-aafe-b6c0ac442b67.json] stdout="" stderr="" err=<nil>
I0314 14:01:28.630904 66283 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker2" pod="dra-4217/dra-test-driver-lgdqg" requestID=1 response="&NodePrepareResourceResponse{CdiDevices:[dra-4217.k8s.io/test=claim-4e0a4335-0bac-4f0e-aafe-b6c0ac442b67],}"
< Exit [It] supports simple pod referencing external resource claim - test/e2e/dra/dra.go:188 @ 03/14/23 14:01:30.316 (4.04s)
> Enter [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 14:01:30.316
Mar 14 14:01:30.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
< Exit [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 14:01:30.32 (4ms)
> Enter [DeferCleanup (Each)] cluster - test/e2e/dra/dra.go:762 @ 03/14/23 14:01:30.32
STEP: delete pods and claims - test/e2e/dra/dra.go:773 @ 03/14/23 14:01:30.326
STEP: deleting *v1.Pod dra-4217/tester-1 - test/e2e/dra/dra.go:780 @ 03/14/23 14:01:30.329
I0314 14:01:31.843924 66283 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker2" pod="dra-4217/dra-test-driver-lgdqg" requestID=2 request="&NodeUnprepareResourceRequest{Namespace:dra-4217,ClaimUid:4e0a4335-0bac-4f0e-aafe-b6c0ac442b67,ClaimName:external-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}"
STEP: deleting CDI file /cdi/dra-4217.k8s.io-4e0a4335-0bac-4f0e-aafe-b6c0ac442b67.json on node kind-worker2 - test/e2e/dra/deploy.go:221 @ 03/14/23 14:01:31.843
Mar 14 14:01:31.843: INFO: >>> kubeConfig: /root/.kube/config
Mar 14 14:01:31.845: INFO: ExecWithOptions: Clientset creation
Mar 14 14:01:31.845: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-4217/pods/dra-test-driver-lgdqg/exec?command=rm&command=-rf&command=%2Fcdi%2Fdra-4217.k8s.io-4e0a4335-0bac-4f0e-aafe-b6c0ac442b67.json&container=plugin&container=plugin&stderr=true&stdout=true)
I0314 14:01:31.941639 66283 io.go:119] "Command completed" command=[rm -rf /cdi/dra-4217.k8s.io-4e0a4335-0bac-4f0e-aafe-b6c0ac442b67.json] stdout="" stderr="" err=<nil>
I0314 14:01:31.941693 66283 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker2" pod="dra-4217/dra-test-driver-lgdqg" requestID=2 response="&NodeUnprepareResourceResponse{}"
STEP: deleting *v1alpha2.ResourceClaim dra-4217/external-claim - test/e2e/dra/dra.go:796 @ 03/14/23 14:01:34.352
STEP: waiting for resources on kind-worker2 to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 14:01:34.359
STEP: waiting for resources on kind-worker to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 14:01:34.359
STEP: waiting for claims to be deallocated and deleted - test/e2e/dra/dra.go:808 @ 03/14/23 14:01:34.359
E0314 14:01:34.363489 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-4217/external-claim"
E0314 14:01:34.373749 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-4217/external-claim"
E0314 14:01:34.388372 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-4217/external-claim"
E0314 14:01:34.413578 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-4217/external-claim"
E0314 14:01:34.457870 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-4217/external-claim"
E0314 14:01:34.543904 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-4217/external-claim"
E0314 14:01:34.709190 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-4217/external-claim"
E0314 14:01:35.038879 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-4217/external-claim"
E0314 14:01:35.685474 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-4217/external-claim"
E0314 14:01:36.970994 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-4217/external-claim"
E0314 14:01:39.536934 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-4217/external-claim"
E0314 14:01:44.662477 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-4217/external-claim"
E0314 14:01:54.909033 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-4217/external-claim"
E0314 14:02:15.393579 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-4217/external-claim"
[FAILED] Timed out after 60.000s.
claims in the namespaces Expected <[]v1alpha2.ResourceClaim | len:1, cap:1>: - metadata: creationTimestamp: "2023-03-14T14:01:26Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T14:01:34Z" finalizers: - dra-4217.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"dra-4217.k8s.io/deletion-protection": {} f:spec: f:allocationMode: {} f:parametersRef: .: {} f:kind: {} f:name: {} f:resourceClassName: {} manager: e2e.test operation: Update time: "2023-03-14T14:01:26Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: f:allocation: .: {} f:availableOnNodes: {} f:context: {} f:shareable: {} f:driverName: {} manager: e2e.test operation: Update subresource: status time: "2023-03-14T14:01:26Z" name: external-claim namespace: dra-4217 resourceVersion: "3430" uid: 4e0a4335-0bac-4f0e-aafe-b6c0ac442b67 spec: allocationMode: Immediate parametersRef: kind: ConfigMap name: parameters-1 resourceClassName: dra-4217-class status: allocation: availableOnNodes: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kind-worker - kind-worker2 context: - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}' shareable: true driverName: dra-4217.k8s.io to be empty
In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:02:34.361
< Exit [DeferCleanup (Each)] cluster - test/e2e/dra/dra.go:762 @ 03/14/23 14:02:34.361 (1m4.041s)
> Enter [DeferCleanup (Each)] cluster - test/e2e/dra/deploy.go:103 @ 03/14/23 14:02:34.361
I0314 14:02:34.361244 66283 controller.go:310] "resource controller: Shutting down" driver="dra-4217.k8s.io"
E0314 14:02:34.362346 66283 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker" pod="dra-4217/dra-test-driver-wsdzm"
E0314 14:02:34.362430 66283 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker2" pod="dra-4217/dra-test-driver-lgdqg"
E0314 14:02:34.362441 66283 nonblockinggrpcserver.go:101] "kubelet plugin/registrar: GRPC server failed" err="listening was stopped" node="kind-worker" pod="dra-4217/dra-test-driver-wsdzm"
< Exit [DeferCleanup (Each)] cluster - test/e2e/dra/deploy.go:103 @ 03/14/23 14:02:34.362 (2ms)
> Enter [DeferCleanup (Each)] cluster - deleting *v1.ReplicaSet: dra-4217/dra-test-driver | create.go:156 @ 03/14/23 14:02:34.362
< Exit [DeferCleanup (Each)] cluster - deleting *v1.ReplicaSet: dra-4217/dra-test-driver | create.go:156 @ 03/14/23 14:02:34.378 (15ms)
> Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 14:02:34.378
< Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 14:02:34.378 (0s)
> Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:02:34.378
STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:02:34.378
STEP: Collecting events from namespace "dra-4217". - test/e2e/framework/debug/dump.go:42 @ 03/14/23 14:02:34.378
STEP: Found 23 events. - test/e2e/framework/debug/dump.go:46 @ 03/14/23 14:02:34.382
Mar 14 14:02:34.382: INFO: At 2023-03-14 14:01:20 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-wsdzm
Mar 14 14:02:34.382: INFO: At 2023-03-14 14:01:20 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-lgdqg
Mar 14 14:02:34.382: INFO: At 2023-03-14 14:01:20 +0000 UTC - event for dra-test-driver-lgdqg: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine
Mar 14 14:02:34.382: INFO: At 2023-03-14 14:01:20 +0000 UTC - event for dra-test-driver-lgdqg: {kubelet kind-worker2} Created: Created container registrar
Mar 14 14:02:34.382: INFO: At 2023-03-14 14:01:20 +0000 UTC - event for dra-test-driver-lgdqg: {default-scheduler } Scheduled: Successfully assigned dra-4217/dra-test-driver-lgdqg to kind-worker2
Mar 14 14:02:34.382: INFO: At 2023-03-14 14:01:20 +0000 UTC - event for dra-test-driver-wsdzm: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine
Mar 14 14:02:34.382: INFO: At 2023-03-14 14:01:20 +0000 UTC - event for dra-test-driver-wsdzm: {kubelet kind-worker} Created: Created container registrar
Mar 14 14:02:34.382: INFO: At 2023-03-14 14:01:20 +0000 UTC - event for dra-test-driver-wsdzm: {default-scheduler } Scheduled: Successfully assigned dra-4217/dra-test-driver-wsdzm to kind-worker
Mar 14 14:02:34.382: INFO: At 2023-03-14 14:01:21 +0000 UTC - event for dra-test-driver-lgdqg: {kubelet kind-worker2} Created: Created container plugin
Mar 14 14:02:34.382: INFO: At 2023-03-14 14:01:21 +0000 UTC - event for dra-test-driver-lgdqg: {kubelet kind-worker2} Started: Started container plugin
Mar 14 14:02:34.382: INFO: At 2023-03-14 14:01:21 +0000 UTC - event for dra-test-driver-lgdqg: {kubelet kind-worker2} Started: Started container registrar
Mar 14 14:02:34.382: INFO: At 2023-03-14 14:01:21 +0000 UTC - event for dra-test-driver-lgdqg: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine
Mar 14 14:02:34.382: INFO: At 2023-03-14 14:01:21 +0000 UTC - event for dra-test-driver-wsdzm: {kubelet kind-worker} Created: Created container plugin
Mar 14 14:02:34.382: INFO: At 2023-03-14 14:01:21 +0000 UTC - event for dra-test-driver-wsdzm: {kubelet kind-worker} Started: Started container plugin
Mar 14 14:02:34.382: INFO: At 2023-03-14 14:01:21 +0000 UTC - event for dra-test-driver-wsdzm: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine
Mar 14 14:02:34.382: INFO: At 2023-03-14 14:01:21 +0000 UTC - event for dra-test-driver-wsdzm: {kubelet kind-worker} Started: Started container registrar
Mar 14 14:02:34.382: INFO: At 2023-03-14 14:01:26 +0000 UTC - event for tester-1: {default-scheduler } FailedScheduling: 0/3 nodes are available: unallocated immediate resourceclaim. no new claims to deallocate, preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod..
Mar 14 14:02:34.382: INFO: At 2023-03-14 14:01:27 +0000 UTC - event for tester-1: {default-scheduler } Scheduled: Successfully assigned dra-4217/tester-1 to kind-worker2
Mar 14 14:02:34.382: INFO: At 2023-03-14 14:01:28 +0000 UTC - event for tester-1: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine
Mar 14 14:02:34.382: INFO: At 2023-03-14 14:01:28 +0000 UTC - event for tester-1: {kubelet kind-worker2} Created: Created container with-resource
Mar 14 14:02:34.382: INFO: At 2023-03-14 14:01:29 +0000 UTC - event for tester-1: {kubelet kind-worker2} Started: Started container with-resource
Mar 14 14:02:34.382: INFO: At 2023-03-14 14:01:30 +0000 UTC - event for tester-1: {kubelet kind-worker2} Killing: Stopping container with-resource
Mar 14 14:02:34.382: INFO: At 2023-03-14 14:01:34 +0000 UTC - event for external-claim: {resource driver dra-4217.k8s.io } Failed: remove allocation: ResourceClaim.resource.k8s.io "external-claim" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete
Mar 14 14:02:34.387: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 14 14:02:34.387: INFO: dra-test-driver-lgdqg kind-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:20 +0000 UTC }]
Mar 14 14:02:34.387: INFO: dra-test-driver-wsdzm kind-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:20 +0000 UTC }]
Mar 14 14:02:34.387: INFO:
Mar 14 14:02:34.449:
INFO: Logging node info for node kind-control-plane Mar 14 14:02:34.454: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7b0c8f1f-7d2e-4b5f-ab52-0e2399b9f764 438 0 2023-03-14 13:57:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-14 13:57:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:57:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} 
status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:58:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e8e6b089f1f44ab8ef4a2bc879ddd73,SystemUUID:ee43f17b-1489-4ea4-bec5-b7916f4f1fb0,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 
registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 14 14:02:34.454: INFO: Logging kubelet events for node kind-control-plane
Mar 14 14:02:34.459: INFO: Logging pods the kubelet thinks is on node kind-control-plane
Mar 14 14:02:34.468: INFO: kube-controller-manager-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded)
Mar 14 14:02:34.468: INFO: Container kube-controller-manager ready: true, restart count 0
Mar 14 14:02:34.468: INFO: kube-scheduler-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded)
Mar 14 14:02:34.468: INFO: Container kube-scheduler ready: true, restart count 0
Mar 14 14:02:34.468: INFO: etcd-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded)
Mar 14 14:02:34.468: INFO: Container etcd ready: true, restart count 0
Mar 14 14:02:34.468: INFO: kube-apiserver-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded)
Mar 14 14:02:34.468: INFO: Container kube-apiserver ready: true, restart count 0
Mar 14 14:02:34.468: INFO: kindnet-nx87k started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded)
Mar 14 14:02:34.468: INFO: Container kindnet-cni ready: true, restart count 0
Mar 14 14:02:34.468: INFO: coredns-ffc665895-vmqts started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded)
Mar 14 14:02:34.468: INFO: Container coredns ready: true, restart count 0
Mar 14 14:02:34.468: INFO: kube-proxy-fm2jh started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded)
Mar 14 14:02:34.468: INFO: Container kube-proxy ready: true, restart count 0
Mar 14 14:02:34.468: INFO: coredns-ffc665895-mnldc started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded)
Mar 14 14:02:34.468: INFO: Container coredns ready: true, restart count 0
Mar 14 14:02:34.468: INFO: local-path-provisioner-687869657c-v9k2k started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded)
Mar 14 14:02:34.468: INFO: Container local-path-provisioner ready: true, restart count 0
Mar 14 14:02:34.548: INFO: Latency metrics for node kind-control-plane
Mar 14 14:02:34.548: INFO: Logging node info for node kind-worker
Mar 14 14:02:34.554: INFO: Node Info: &Node{ObjectMeta:{kind-worker 9cca062e-b3b4-4ef2-9c10-412063b4ece4 1368 0 2023-03-14 13:58:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-14 13:58:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet
Update v1 2023-03-14 13:59:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:26 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5a3b3841831c42fc96e5cb187f537f04,SystemUUID:ed67c939-37e3-47de-ab06-0144304a5aa1,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 
registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 14 14:02:34.554: INFO: Logging kubelet events for node kind-worker
Mar 14 14:02:34.560: INFO: Logging pods the kubelet thinks is on node kind-worker
Mar 14 14:02:34.569: INFO: dra-test-driver-wsdzm started at 2023-03-14 14:01:20 +0000 UTC (0+2 container statuses recorded)
Mar 14 14:02:34.569: INFO: Container plugin ready: true, restart count 0
Mar 14 14:02:34.569: INFO: Container registrar ready: true, restart count 0
Mar 14 14:02:34.569: INFO: kindnet-fzdn9 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded)
Mar 14 14:02:34.569: INFO: Container kindnet-cni ready: true, restart count 0
Mar 14 14:02:34.569: INFO: kube-proxy-l4q98 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded)
Mar 14 14:02:34.569: INFO: Container kube-proxy ready: true, restart count 0
Mar 14 14:02:34.569: INFO: dra-test-driver-zg2wf started at 2023-03-14 14:01:29 +0000 UTC (0+2 container statuses recorded)
Mar 14 14:02:34.569: INFO: Container plugin ready: true, restart count 0
Mar 14 14:02:34.569: INFO: Container registrar ready: true, restart count 0
Mar 14 14:02:34.569: INFO: dra-test-driver-xrvr8 started at 2023-03-14 14:01:15 +0000 UTC (0+2 container statuses recorded)
Mar 14 14:02:34.569: INFO: Container plugin ready: true, restart count 0
Mar 14 14:02:34.569: INFO: Container registrar ready: true, restart count 0
Mar 14 14:02:34.569: INFO: dra-test-driver-dx7tb started at 2023-03-14 14:01:20 +0000 UTC (0+2 container statuses recorded)
Mar 14 14:02:34.569: INFO: Container plugin ready: true, restart count 0
Mar 14 14:02:34.569: INFO: Container registrar ready: true, restart count 0
Mar 14 14:02:34.569: INFO: dra-test-driver-9bgm4 started at 2023-03-14 14:01:23 +0000 UTC (0+2 container statuses recorded)
Mar 14 14:02:34.569: INFO: Container plugin ready: true, restart count 0
Mar 14 14:02:34.569: INFO: Container registrar ready: true, restart count 0
Mar 14 14:02:34.644: INFO: Latency metrics for node kind-worker
Mar 14 14:02:34.644: INFO: Logging node info for node kind-worker2
Mar 14 14:02:34.647: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 49a194e2-5e70-437e-aa3c-3a490ff23c54 1358 0 2023-03-14 13:58:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:58:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:59:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 
13:58:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:48810b9a669b47cea51d5fa0f821cf84,SystemUUID:603f9452-86ad-460a-83be-e3f10d4a362c,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 
registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:02:34.647: INFO: Logging kubelet events for node kind-worker2 Mar 14 14:02:34.652: INFO: Logging pods the kubelet thinks is on node kind-worker2 Mar 14 14:02:34.662: INFO: kube-proxy-vnlx8 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:34.662: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:02:34.662: INFO: dra-test-driver-9f5vg started at 2023-03-14 14:01:29 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:34.662: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:34.662: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:34.662: INFO: dra-test-driver-lgdqg started at 2023-03-14 14:01:20 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:34.662: INFO: 
Container plugin ready: true, restart count 0 Mar 14 14:02:34.662: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:34.662: INFO: dra-test-driver-w6qnj started at 2023-03-14 14:01:20 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:34.662: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:34.662: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:34.662: INFO: kindnet-5qdz7 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:34.662: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:02:34.662: INFO: dra-test-driver-h6qql started at 2023-03-14 14:01:23 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:34.662: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:34.662: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:34.717: INFO: Latency metrics for node kind-worker2 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:02:34.717 (339ms) < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:02:34.717 (339ms) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:02:34.717 STEP: Destroying namespace "dra-4217" for this suite. - test/e2e/framework/framework.go:351 @ 03/14/23 14:02:34.717 < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:02:34.723 (6ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:02:34.724 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:02:34.724 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sDRA\s\[Feature\:DynamicResourceAllocation\]\scluster\swith\simmediate\sallocation\ssupports\ssimple\spod\sreferencing\sinline\sresource\sclaim$'
[FAILED] Timed out after 60.001s.
claims in the namespaces
Expected
    <[]v1alpha2.ResourceClaim | len:1, cap:1>:
    - metadata:
        creationTimestamp: "2023-03-14T13:58:51Z"
        deletionGracePeriodSeconds: 0
        deletionTimestamp: "2023-03-14T13:58:59Z"
        finalizers:
        - dra-1377.k8s.io/deletion-protection
        managedFields:
        - apiVersion: resource.k8s.io/v1alpha2
          fieldsType: FieldsV1
          fieldsV1:
            f:metadata:
              f:finalizers:
                .: {}
                v:"dra-1377.k8s.io/deletion-protection": {}
          manager: e2e.test
          operation: Update
          time: "2023-03-14T13:58:51Z"
        - apiVersion: resource.k8s.io/v1alpha2
          fieldsType: FieldsV1
          fieldsV1:
            f:status:
              f:allocation:
                .: {}
                f:availableOnNodes: {}
                f:context: {}
                f:shareable: {}
              f:driverName: {}
          manager: e2e.test
          operation: Update
          subresource: status
          time: "2023-03-14T13:58:51Z"
        - apiVersion: resource.k8s.io/v1alpha2
          fieldsType: FieldsV1
          fieldsV1:
            f:metadata:
              f:ownerReferences:
                .: {}
                k:{"uid":"1c12376a-9ab0-436d-a5a0-ab9f404a91fd"}: {}
            f:spec:
              f:allocationMode: {}
              f:parametersRef:
                .: {}
                f:kind: {}
                f:name: {}
              f:resourceClassName: {}
          manager: kube-controller-manager
          operation: Update
          time: "2023-03-14T13:58:51Z"
        name: tester-1-my-inline-claim
        namespace: dra-1377
        ownerReferences:
        - apiVersion: v1
          blockOwnerDeletion: true
          controller: true
          kind: Pod
          name: tester-1
          uid: 1c12376a-9ab0-436d-a5a0-ab9f404a91fd
        resourceVersion: "1215"
        uid: a136e872-ec92-4801-b43f-e5e07033abd1
      spec:
        allocationMode: Immediate
        parametersRef:
          kind: ConfigMap
          name: parameters-1
        resourceClassName: dra-1377-class
      status:
        allocation:
          availableOnNodes:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - kind-worker
                - kind-worker2
          context:
          - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}'
          shareable: true
        driverName: dra-1377.k8s.io
to be empty
In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 13:59:59.729
from junit_01.xml
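The claim in the dump above is stuck because the driver's attempt to clear `status.allocation` keeps being rejected ("can only remove while marking a deallocation as complete", repeated in the controller log further down). Loosely, the v1alpha2 validation only lets a driver drop an existing allocation in the same status update that marks a requested deallocation as finished. A toy model of that rule, using hypothetical types rather than the actual Kubernetes validation code:

```go
package main

import (
	"errors"
	"fmt"
)

// ClaimStatus is a toy stand-in for ResourceClaimStatus in
// resource.k8s.io/v1alpha2, reduced to the two fields relevant here.
type ClaimStatus struct {
	Allocation            *string // nil means "not allocated"
	DeallocationRequested bool
}

// validateStatusUpdate models the rule implied by the error message: an
// existing allocation may only be removed while DeallocationRequested is
// simultaneously transitioning back to false.
func validateStatusUpdate(prev, next ClaimStatus) error {
	removingAllocation := prev.Allocation != nil && next.Allocation == nil
	completingDeallocation := prev.DeallocationRequested && !next.DeallocationRequested
	if removingAllocation && !completingDeallocation {
		return errors.New("status.allocation: Forbidden: can only remove while marking a deallocation as complete")
	}
	return nil
}

func main() {
	alloc := "allocated"
	// What the test driver apparently attempted: drop the allocation directly.
	fmt.Println(validateStatusUpdate(
		ClaimStatus{Allocation: &alloc},
		ClaimStatus{}))
	// The permitted transition: deallocation had been requested first.
	fmt.Println(validateStatusUpdate(
		ClaimStatus{Allocation: &alloc, DeallocationRequested: true},
		ClaimStatus{}))
}
```

Under this model the test driver would need the deallocation to be requested (or skip the removal and rely on the finalizer path) before clearing the allocation, which matches the claim never reaching deletion within the 60s cleanup window.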
> Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 13:58:45.37 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:211 @ 03/14/23 13:58:45.37 Mar 14 13:58:45.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dra - test/e2e/framework/framework.go:250 @ 03/14/23 13:58:45.372 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 03/14/23 13:58:45.441 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:262 @ 03/14/23 13:58:45.446 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 13:58:45.472 (102ms) > Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 13:58:45.472 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 13:58:45.472 (0s) > Enter [BeforeEach] cluster - test/e2e/dra/deploy.go:62 @ 03/14/23 13:58:45.472 STEP: selecting nodes - test/e2e/dra/deploy.go:63 @ 03/14/23 13:58:45.472 Mar 14 13:58:45.486: INFO: testing on nodes [kind-worker kind-worker2] < Exit [BeforeEach] cluster - test/e2e/dra/deploy.go:62 @ 03/14/23 13:58:45.486 (14ms) > Enter [BeforeEach] cluster - test/e2e/dra/deploy.go:95 @ 03/14/23 13:58:45.486 STEP: deploying driver on nodes [kind-worker kind-worker2] - test/e2e/dra/deploy.go:130 @ 03/14/23 13:58:45.486 I0314 13:58:45.486910 66283 controller.go:295] "resource controller: Starting" driver="dra-1377.k8s.io" Mar 14 13:58:45.490: INFO: creating *v1.ReplicaSet: dra-1377/dra-test-driver I0314 13:58:49.561558 66283 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker2" pod="dra-1377/dra-test-driver-4x8n7" I0314 13:58:49.561580 66283 nonblockinggrpcserver.go:107] 
"kubelet plugin/registrar: GRPC server started" node="kind-worker2" pod="dra-1377/dra-test-driver-4x8n7" I0314 13:58:49.562509 66283 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker" pod="dra-1377/dra-test-driver-psxfr" I0314 13:58:49.562526 66283 nonblockinggrpcserver.go:107] "kubelet plugin/registrar: GRPC server started" node="kind-worker" pod="dra-1377/dra-test-driver-psxfr" STEP: wait for plugin registration - test/e2e/dra/deploy.go:242 @ 03/14/23 13:58:49.562 I0314 13:58:49.801445 66283 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-1377/dra-test-driver-psxfr" requestID=1 request="&InfoRequest{}" I0314 13:58:49.801493 66283 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-1377/dra-test-driver-psxfr" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-1377.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-1377.k8s.io.sock,SupportedVersions:[1.0.0],}" I0314 13:58:49.814214 66283 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-1377/dra-test-driver-psxfr" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}" I0314 13:58:49.814246 66283 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-1377/dra-test-driver-psxfr" requestID=2 response="&RegistrationStatusResponse{}" I0314 13:58:49.989681 66283 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-1377/dra-test-driver-4x8n7" requestID=1 request="&InfoRequest{}" I0314 13:58:49.989736 66283 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-1377/dra-test-driver-4x8n7" requestID=1 
response="&PluginInfo{Type:DRAPlugin,Name:dra-1377.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-1377.k8s.io.sock,SupportedVersions:[1.0.0],}" I0314 13:58:50.001469 66283 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-1377/dra-test-driver-4x8n7" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}" I0314 13:58:50.001504 66283 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-1377/dra-test-driver-4x8n7" requestID=2 response="&RegistrationStatusResponse{}" < Exit [BeforeEach] cluster - test/e2e/dra/deploy.go:95 @ 03/14/23 13:58:51.562 (6.076s) > Enter [BeforeEach] cluster - test/e2e/dra/dra.go:752 @ 03/14/23 13:58:51.562 STEP: creating *v1alpha2.ResourceClass dra-1377-class - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.563 END STEP: creating *v1alpha2.ResourceClass dra-1377-class - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.574 (12ms) < Exit [BeforeEach] cluster - test/e2e/dra/dra.go:752 @ 03/14/23 13:58:51.574 (12ms) > Enter [It] supports simple pod referencing inline resource claim - test/e2e/dra/dra.go:172 @ 03/14/23 13:58:51.574 STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.575 END STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.596 (21ms) STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.596 END STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.613 (17ms) STEP: creating *v1alpha2.ResourceClaimTemplate tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.613 END STEP: creating *v1alpha2.ResourceClaimTemplate tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.633 (20ms) I0314 13:58:53.257496 66283 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker2" pod="dra-1377/dra-test-driver-4x8n7" requestID=1 
request="&NodePrepareResourceRequest{Namespace:dra-1377,ClaimUid:a136e872-ec92-4801-b43f-e5e07033abd1,ClaimName:tester-1-my-inline-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" STEP: creating CDI file /cdi/dra-1377.k8s.io-a136e872-ec92-4801-b43f-e5e07033abd1.json on node kind-worker2: {"cdiVersion":"0.3.0","kind":"dra-1377.k8s.io/test","devices":[{"name":"claim-a136e872-ec92-4801-b43f-e5e07033abd1","containerEdits":{"env":["user_a=b"]}}]} - test/e2e/dra/deploy.go:217 @ 03/14/23 13:58:53.257 Mar 14 13:58:53.257: INFO: >>> kubeConfig: /root/.kube/config Mar 14 13:58:53.258: INFO: ExecWithOptions: Clientset creation Mar 14 13:58:53.258: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-1377/pods/dra-test-driver-4x8n7/exec?command=sh&command=-c&command=base64+-d+%3E%27%2Fcdi%2Fdra-1377.k8s.io-a136e872-ec92-4801-b43f-e5e07033abd1.json.tmp%27+%3C%3CEOF%0AeyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTEzNzcuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tYTEzNmU4NzItZWM5Mi00ODAxLWI0M2YtZTVlMDcwMzNhYmQxIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19%0AEOF&container=plugin&container=plugin&stderr=true&stdout=true) I0314 13:58:53.467671 66283 io.go:119] "Command completed" command=< [sh -c base64 -d >'/cdi/dra-1377.k8s.io-a136e872-ec92-4801-b43f-e5e07033abd1.json.tmp' <<EOF eyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTEzNzcuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tYTEzNmU4NzItZWM5Mi00ODAxLWI0M2YtZTVlMDcwMzNhYmQxIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19 EOF] > stdout="" stderr="" err=<nil> Mar 14 13:58:53.467: INFO: >>> kubeConfig: /root/.kube/config Mar 14 13:58:53.468: INFO: ExecWithOptions: Clientset creation Mar 14 13:58:53.468: INFO: ExecWithOptions: execute(POST 
https://127.0.0.1:34309/api/v1/namespaces/dra-1377/pods/dra-test-driver-4x8n7/exec?command=mv&command=%2Fcdi%2Fdra-1377.k8s.io-a136e872-ec92-4801-b43f-e5e07033abd1.json.tmp&command=%2Fcdi%2Fdra-1377.k8s.io-a136e872-ec92-4801-b43f-e5e07033abd1.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 13:58:53.671096 66283 io.go:119] "Command completed" command=[mv /cdi/dra-1377.k8s.io-a136e872-ec92-4801-b43f-e5e07033abd1.json.tmp /cdi/dra-1377.k8s.io-a136e872-ec92-4801-b43f-e5e07033abd1.json] stdout="" stderr="" err=<nil> I0314 13:58:53.671159 66283 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker2" pod="dra-1377/dra-test-driver-4x8n7" requestID=1 response="&NodePrepareResourceResponse{CdiDevices:[dra-1377.k8s.io/test=claim-a136e872-ec92-4801-b43f-e5e07033abd1],}" < Exit [It] supports simple pod referencing inline resource claim - test/e2e/dra/dra.go:172 @ 03/14/23 13:58:55.668 (4.094s) > Enter [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 13:58:55.668 Mar 14 13:58:55.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 13:58:55.674 (5ms) > Enter [DeferCleanup (Each)] cluster - test/e2e/dra/dra.go:762 @ 03/14/23 13:58:55.674 STEP: delete pods and claims - test/e2e/dra/dra.go:773 @ 03/14/23 13:58:55.683 STEP: deleting *v1.Pod dra-1377/tester-1 - test/e2e/dra/dra.go:780 @ 03/14/23 13:58:55.688 I0314 13:58:59.097943 66283 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker2" pod="dra-1377/dra-test-driver-4x8n7" requestID=2 request="&NodeUnprepareResourceRequest{Namespace:dra-1377,ClaimUid:a136e872-ec92-4801-b43f-e5e07033abd1,ClaimName:tester-1-my-inline-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" STEP: deleting CDI file 
/cdi/dra-1377.k8s.io-a136e872-ec92-4801-b43f-e5e07033abd1.json on node kind-worker2 - test/e2e/dra/deploy.go:221 @ 03/14/23 13:58:59.097 Mar 14 13:58:59.097: INFO: >>> kubeConfig: /root/.kube/config Mar 14 13:58:59.099: INFO: ExecWithOptions: Clientset creation Mar 14 13:58:59.099: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-1377/pods/dra-test-driver-4x8n7/exec?command=rm&command=-rf&command=%2Fcdi%2Fdra-1377.k8s.io-a136e872-ec92-4801-b43f-e5e07033abd1.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 13:58:59.295254 66283 io.go:119] "Command completed" command=[rm -rf /cdi/dra-1377.k8s.io-a136e872-ec92-4801-b43f-e5e07033abd1.json] stdout="" stderr="" err=<nil> I0314 13:58:59.295322 66283 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker2" pod="dra-1377/dra-test-driver-4x8n7" requestID=2 response="&NodeUnprepareResourceResponse{}" STEP: deleting *v1alpha2.ResourceClaim dra-1377/tester-1-my-inline-claim - test/e2e/dra/dra.go:796 @ 03/14/23 13:58:59.72 E0314 13:58:59.726427 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-1377/tester-1-my-inline-claim" STEP: waiting for resources on kind-worker to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 13:58:59.726 STEP: waiting for resources on kind-worker2 to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 13:58:59.726 STEP: waiting for claims to be deallocated and deleted - test/e2e/dra/dra.go:808 @ 03/14/23 13:58:59.726 E0314 13:58:59.736913 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" 
key="claim:dra-1377/tester-1-my-inline-claim" E0314 13:58:59.754054 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-1377/tester-1-my-inline-claim" E0314 13:58:59.780633 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-1377/tester-1-my-inline-claim" E0314 13:58:59.830401 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-1377/tester-1-my-inline-claim" E0314 13:58:59.921159 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-1377/tester-1-my-inline-claim" E0314 13:59:00.090986 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-1377/tester-1-my-inline-claim" E0314 13:59:00.424224 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-1377/tester-1-my-inline-claim" E0314 13:59:01.071839 66283 controller.go:345] 
"resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-1377/tester-1-my-inline-claim" E0314 13:59:02.376803 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-1377/tester-1-my-inline-claim" E0314 13:59:04.977757 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-1377/tester-1-my-inline-claim" E0314 13:59:10.103678 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-1377/tester-1-my-inline-claim" E0314 13:59:20.348643 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-1377/tester-1-my-inline-claim" E0314 13:59:40.834103 66283 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-1377/tester-1-my-inline-claim" [FAILED] Timed out after 60.001s. 
claims in the namespaces Expected <[]v1alpha2.ResourceClaim | len:1, cap:1>: - metadata: creationTimestamp: "2023-03-14T13:58:51Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T13:58:59Z" finalizers: - dra-1377.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"dra-1377.k8s.io/deletion-protection": {} manager: e2e.test operation: Update time: "2023-03-14T13:58:51Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: f:allocation: .: {} f:availableOnNodes: {} f:context: {} f:shareable: {} f:driverName: {} manager: e2e.test operation: Update subresource: status time: "2023-03-14T13:58:51Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:ownerReferences: .: {} k:{"uid":"1c12376a-9ab0-436d-a5a0-ab9f404a91fd"}: {} f:spec: f:allocationMode: {} f:parametersRef: .: {} f:kind: {} f:name: {} f:resourceClassName: {} manager: kube-controller-manager operation: Update time: "2023-03-14T13:58:51Z" name: tester-1-my-inline-claim namespace: dra-1377 ownerReferences: - apiVersion: v1 blockOwnerDeletion: true controller: true kind: Pod name: tester-1 uid: 1c12376a-9ab0-436d-a5a0-ab9f404a91fd resourceVersion: "1215" uid: a136e872-ec92-4801-b43f-e5e07033abd1 spec: allocationMode: Immediate parametersRef: kind: ConfigMap name: parameters-1 resourceClassName: dra-1377-class status: allocation: availableOnNodes: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kind-worker - kind-worker2 context: - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}' shareable: true driverName: dra-1377.k8s.io to be empty In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 13:59:59.729 < Exit [DeferCleanup (Each)] cluster - test/e2e/dra/dra.go:762 @ 03/14/23 13:59:59.729 (1m4.055s) > Enter [DeferCleanup (Each)] cluster - test/e2e/dra/deploy.go:103 @ 03/14/23 13:59:59.729 I0314 
13:59:59.729476 66283 controller.go:310] "resource controller: Shutting down" driver="dra-1377.k8s.io" E0314 13:59:59.730351 66283 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker" pod="dra-1377/dra-test-driver-psxfr" E0314 13:59:59.730440 66283 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker2" pod="dra-1377/dra-test-driver-4x8n7" E0314 13:59:59.730542 66283 nonblockinggrpcserver.go:101] "kubelet plugin/registrar: GRPC server failed" err="listening was stopped" node="kind-worker" pod="dra-1377/dra-test-driver-psxfr" < Exit [DeferCleanup (Each)] cluster - test/e2e/dra/deploy.go:103 @ 03/14/23 13:59:59.732 (3ms) > Enter [DeferCleanup (Each)] cluster - deleting *v1.ReplicaSet: dra-1377/dra-test-driver | create.go:156 @ 03/14/23 13:59:59.732 < Exit [DeferCleanup (Each)] cluster - deleting *v1.ReplicaSet: dra-1377/dra-test-driver | create.go:156 @ 03/14/23 13:59:59.745 (13ms) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 13:59:59.745 < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 13:59:59.745 (0s) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 13:59:59.745 STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 13:59:59.745 STEP: Collecting events from namespace "dra-1377". - test/e2e/framework/debug/dump.go:42 @ 03/14/23 13:59:59.745 STEP: Found 26 events. 
- test/e2e/framework/debug/dump.go:46 @ 03/14/23 13:59:59.749 Mar 14 13:59:59.749: INFO: At 2023-03-14 13:58:45 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-psxfr Mar 14 13:59:59.749: INFO: At 2023-03-14 13:58:45 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-4x8n7 Mar 14 13:59:59.749: INFO: At 2023-03-14 13:58:45 +0000 UTC - event for dra-test-driver-4x8n7: {default-scheduler } Scheduled: Successfully assigned dra-1377/dra-test-driver-4x8n7 to kind-worker2 Mar 14 13:59:59.749: INFO: At 2023-03-14 13:58:45 +0000 UTC - event for dra-test-driver-psxfr: {default-scheduler } Scheduled: Successfully assigned dra-1377/dra-test-driver-psxfr to kind-worker Mar 14 13:59:59.749: INFO: At 2023-03-14 13:58:46 +0000 UTC - event for dra-test-driver-4x8n7: {kubelet kind-worker2} Pulling: Pulling image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" Mar 14 13:59:59.749: INFO: At 2023-03-14 13:58:46 +0000 UTC - event for dra-test-driver-psxfr: {kubelet kind-worker} Pulling: Pulling image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" Mar 14 13:59:59.749: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-4x8n7: {kubelet kind-worker2} Created: Created container registrar Mar 14 13:59:59.749: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-4x8n7: {kubelet kind-worker2} Created: Created container plugin Mar 14 13:59:59.749: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-4x8n7: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 13:59:59.749: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-4x8n7: {kubelet kind-worker2} Started: Started container registrar Mar 14 13:59:59.749: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-4x8n7: {kubelet kind-worker2} Pulled: Successfully 
pulled image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" in 122.893939ms (1.044127429s including waiting) Mar 14 13:59:59.750: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-psxfr: {kubelet kind-worker} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" in 234.93591ms (1.386336892s including waiting) Mar 14 13:59:59.750: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-psxfr: {kubelet kind-worker} Created: Created container registrar Mar 14 13:59:59.750: INFO: At 2023-03-14 13:58:48 +0000 UTC - event for dra-test-driver-4x8n7: {kubelet kind-worker2} Started: Started container plugin Mar 14 13:59:59.750: INFO: At 2023-03-14 13:58:48 +0000 UTC - event for dra-test-driver-psxfr: {kubelet kind-worker} Started: Started container registrar Mar 14 13:59:59.750: INFO: At 2023-03-14 13:58:48 +0000 UTC - event for dra-test-driver-psxfr: {kubelet kind-worker} Started: Started container plugin Mar 14 13:59:59.750: INFO: At 2023-03-14 13:58:48 +0000 UTC - event for dra-test-driver-psxfr: {kubelet kind-worker} Created: Created container plugin Mar 14 13:59:59.750: INFO: At 2023-03-14 13:58:48 +0000 UTC - event for dra-test-driver-psxfr: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 13:59:59.750: INFO: At 2023-03-14 13:58:51 +0000 UTC - event for tester-1: {resource_claim } FailedResourceClaimCreation: PodResourceClaim my-inline-claim: resource claim template "tester-1": resourceclaimtemplate.resource.k8s.io "tester-1" not found Mar 14 13:59:59.750: INFO: At 2023-03-14 13:58:51 +0000 UTC - event for tester-1: {default-scheduler } FailedScheduling: 0/3 nodes are available: waiting for dynamic resource controller to create the resourceclaim "tester-1-my-inline-claim". no new claims to deallocate, preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod.. 
Mar 14 13:59:59.750: INFO: At 2023-03-14 13:58:52 +0000 UTC - event for tester-1: {default-scheduler } Scheduled: Successfully assigned dra-1377/tester-1 to kind-worker2 Mar 14 13:59:59.750: INFO: At 2023-03-14 13:58:54 +0000 UTC - event for tester-1: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 14 13:59:59.750: INFO: At 2023-03-14 13:58:54 +0000 UTC - event for tester-1: {kubelet kind-worker2} Created: Created container with-resource Mar 14 13:59:59.750: INFO: At 2023-03-14 13:58:54 +0000 UTC - event for tester-1: {kubelet kind-worker2} Started: Started container with-resource Mar 14 13:59:59.750: INFO: At 2023-03-14 13:58:57 +0000 UTC - event for tester-1: {kubelet kind-worker2} Killing: Stopping container with-resource Mar 14 13:59:59.750: INFO: At 2023-03-14 13:58:59 +0000 UTC - event for tester-1-my-inline-claim: {resource driver dra-1377.k8s.io } Failed: remove allocation: ResourceClaim.resource.k8s.io "tester-1-my-inline-claim" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete Mar 14 13:59:59.758: INFO: POD NODE PHASE GRACE CONDITIONS Mar 14 13:59:59.758: INFO: dra-test-driver-4x8n7 kind-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:45 +0000 UTC }] Mar 14 13:59:59.758: INFO: dra-test-driver-psxfr kind-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:45 +0000 UTC }] Mar 14 13:59:59.758: INFO: 
Mar 14 13:59:59.905: INFO: Logging node info for node kind-control-plane Mar 14 13:59:59.909: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7b0c8f1f-7d2e-4b5f-ab52-0e2399b9f764 438 0 2023-03-14 13:57:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-14 13:57:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:57:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} 
status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:58:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e8e6b089f1f44ab8ef4a2bc879ddd73,SystemUUID:ee43f17b-1489-4ea4-bec5-b7916f4f1fb0,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 
registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 13:59:59.910: INFO: Logging kubelet events for node kind-control-plane Mar 14 13:59:59.933: INFO: Logging pods the kubelet thinks is on node kind-control-plane Mar 14 13:59:59.954: INFO: kindnet-nx87k started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 13:59:59.954: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 13:59:59.954: INFO: coredns-ffc665895-vmqts started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 13:59:59.954: INFO: Container coredns ready: true, restart count 0 Mar 14 13:59:59.954: INFO: kube-controller-manager-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 13:59:59.954: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 14 13:59:59.954: INFO: kube-scheduler-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 13:59:59.954: INFO: Container kube-scheduler ready: true, restart count 0 Mar 14 13:59:59.954: INFO: etcd-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 13:59:59.954: INFO: Container etcd ready: true, restart count 0 Mar 14 13:59:59.954: INFO: kube-apiserver-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 13:59:59.954: INFO: Container kube-apiserver 
ready: true, restart count 0 Mar 14 13:59:59.954: INFO: kube-proxy-fm2jh started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 13:59:59.954: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 13:59:59.954: INFO: coredns-ffc665895-mnldc started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 13:59:59.954: INFO: Container coredns ready: true, restart count 0 Mar 14 13:59:59.954: INFO: local-path-provisioner-687869657c-v9k2k started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 13:59:59.954: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 14 14:00:00.158: INFO: Latency metrics for node kind-control-plane Mar 14 14:00:00.158: INFO: Logging node info for node kind-worker Mar 14 14:00:00.166: INFO: Node Info: &Node{ObjectMeta:{kind-worker 9cca062e-b3b4-4ef2-9c10-412063b4ece4 1368 0 2023-03-14 13:58:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-14 13:58:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet 
Update v1 2023-03-14 13:59:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:26 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5a3b3841831c42fc96e5cb187f537f04,SystemUUID:ed67c939-37e3-47de-ab06-0144304a5aa1,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 
registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:00:00.166: INFO: Logging kubelet events for node kind-worker Mar 14 14:00:00.183: INFO: Logging pods the kubelet thinks is on node kind-worker Mar 14 14:00:00.203: INFO: dra-test-driver-7wgnx started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.203: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.203: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.203: INFO: kindnet-fzdn9 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:00.203: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:00:00.203: INFO: kube-proxy-l4q98 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:00.203: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:00:00.203: INFO: dra-test-driver-6zxqg started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.203: INFO: Container plugin ready: 
true, restart count 0 Mar 14 14:00:00.203: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.203: INFO: dra-test-driver-other-xtlpg started at 2023-03-14 13:58:51 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.203: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.203: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.203: INFO: dra-test-driver-psxfr started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.203: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.203: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.203: INFO: dra-test-driver-4t66h started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.203: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.203: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.203: INFO: dra-test-driver-xqsdd started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.203: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.203: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.203: INFO: dra-test-driver-86zdr started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.203: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.203: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.203: INFO: dra-test-driver-5j9fw started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.203: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.203: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.203: INFO: tester-1 started at 2023-03-14 13:58:55 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:00.203: INFO: Container with-resource ready: false, restart count 0 Mar 14 14:00:00.360: INFO: Latency metrics for node kind-worker 
Mar 14 14:00:00.360: INFO: Logging node info for node kind-worker2 Mar 14 14:00:00.368: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 49a194e2-5e70-437e-aa3c-3a490ff23c54 1358 0 2023-03-14 13:58:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:58:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:59:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: 
{{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:48810b9a669b47cea51d5fa0f821cf84,SystemUUID:603f9452-86ad-460a-83be-e3f10d4a362c,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 
LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:00:00.369: INFO: Logging kubelet events for node kind-worker2 Mar 14 14:00:00.374: INFO: Logging pods the kubelet thinks is on node kind-worker2 Mar 14 14:00:00.389: INFO: dra-test-driver-fb6kg started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.389: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.389: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.389: INFO: dra-test-driver-other-mp779 started at 2023-03-14 13:58:51 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.389: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.389: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.389: INFO: dra-test-driver-n5z8m started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.389: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.389: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.389: INFO: dra-test-driver-f8m4d started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.389: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.389: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.389: INFO: kindnet-5qdz7 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:00.389: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:00:00.389: INFO: dra-test-driver-4x8n7 started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.389: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.389: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.389: INFO: dra-test-driver-gjrmj started at 2023-03-14 
13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.389: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.389: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.389: INFO: kube-proxy-vnlx8 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:00.389: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:00:00.389: INFO: dra-test-driver-7qxsw started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.389: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.389: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.389: INFO: dra-test-driver-lgb7f started at 2023-03-14 13:58:46 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.389: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.389: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.558: INFO: Latency metrics for node kind-worker2 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:00:00.558 (812ms) < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:00:00.558 (812ms) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:00:00.558 STEP: Destroying namespace "dra-1377" for this suite. - test/e2e/framework/framework.go:351 @ 03/14/23 14:00:00.558 < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:00:00.573 (15ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:00:00.573 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:00:00.573 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sDRA\s\[Feature\:DynamicResourceAllocation\]\sdriver\ssupports\sclaim\sand\sclass\sparameters$'
[FAILED] Timed out after 60.000s. claims in the namespaces Expected <[]v1alpha2.ResourceClaim | len:1, cap:1>: - metadata: creationTimestamp: "2023-03-14T14:00:04Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T14:00:14Z" finalizers: - dra-3392.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:ownerReferences: .: {} k:{"uid":"8b3ae50b-591d-4052-ad57-f9b471dce106"}: {} f:spec: f:allocationMode: {} f:parametersRef: .: {} f:kind: {} f:name: {} f:resourceClassName: {} manager: kube-controller-manager operation: Update time: "2023-03-14T14:00:04Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"dra-3392.k8s.io/deletion-protection": {} manager: e2e.test operation: Update time: "2023-03-14T14:00:06Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: f:allocation: .: {} f:availableOnNodes: {} f:context: {} f:shareable: {} f:driverName: {} manager: e2e.test operation: Update subresource: status time: "2023-03-14T14:00:06Z" name: tester-1-my-inline-claim namespace: dra-3392 ownerReferences: - apiVersion: v1 blockOwnerDeletion: true controller: true kind: Pod name: tester-1 uid: 8b3ae50b-591d-4052-ad57-f9b471dce106 resourceVersion: "2240" uid: ea23a345-d681-416d-aa82-0e8a33f8220b spec: allocationMode: WaitForFirstConsumer parametersRef: kind: ConfigMap name: parameters-2 resourceClassName: dra-3392-class status: allocation: availableOnNodes: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kind-worker context: - data: '{"EnvVars":{"admin_x":"y","user_a":"b"},"NodeName":""}' shareable: true driverName: dra-3392.k8s.io to be empty In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:01:14.964from junit_01.xml
> Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 14:00:00.721
STEP: Creating a kubernetes client - test/e2e/framework/framework.go:211 @ 03/14/23 14:00:00.721
Mar 14 14:00:00.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dra - test/e2e/framework/framework.go:250 @ 03/14/23 14:00:00.722
STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 03/14/23 14:00:00.772
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:262 @ 03/14/23 14:00:00.782
< Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 14:00:00.793 (72ms)
> Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 14:00:00.793
< Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 14:00:00.793 (0s)
> Enter [BeforeEach] driver - test/e2e/dra/deploy.go:62 @ 03/14/23 14:00:00.793
STEP: selecting nodes - test/e2e/dra/deploy.go:63 @ 03/14/23 14:00:00.793
Mar 14 14:00:00.800: INFO: testing on nodes [kind-worker]
< Exit [BeforeEach] driver - test/e2e/dra/deploy.go:62 @ 03/14/23 14:00:00.8 (7ms)
> Enter [BeforeEach] driver - test/e2e/dra/deploy.go:95 @ 03/14/23 14:00:00.801
STEP: deploying driver on nodes [kind-worker] - test/e2e/dra/deploy.go:130 @ 03/14/23 14:00:00.801
I0314 14:00:00.801375 66264 controller.go:295] "resource controller: Starting" driver="dra-3392.k8s.io"
Mar 14 14:00:00.802: INFO: creating *v1.ReplicaSet: dra-3392/dra-test-driver
I0314 14:00:02.847507 66264 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker" pod="dra-3392/dra-test-driver-wfhjf"
I0314 14:00:02.847537 66264 nonblockinggrpcserver.go:107] "kubelet plugin/registrar: GRPC server started" node="kind-worker" pod="dra-3392/dra-test-driver-wfhjf"
STEP: wait for plugin registration - test/e2e/dra/deploy.go:242 @ 03/14/23 14:00:02.847
I0314 14:00:03.052018 66264 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-3392/dra-test-driver-wfhjf" requestID=1 request="&InfoRequest{}"
I0314 14:00:03.052052 66264 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-3392/dra-test-driver-wfhjf" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-3392.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-3392.k8s.io.sock,SupportedVersions:[1.0.0],}"
I0314 14:00:03.056328 66264 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-3392/dra-test-driver-wfhjf" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}"
I0314 14:00:03.056385 66264 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-3392/dra-test-driver-wfhjf" requestID=2 response="&RegistrationStatusResponse{}"
< Exit [BeforeEach] driver - test/e2e/dra/deploy.go:95 @ 03/14/23 14:00:04.848 (4.047s)
> Enter [BeforeEach] driver - test/e2e/dra/dra.go:752 @ 03/14/23 14:00:04.848
STEP: creating *v1alpha2.ResourceClass dra-3392-class - test/e2e/dra/dra.go:706 @ 03/14/23 14:00:04.848
END STEP: creating *v1alpha2.ResourceClass dra-3392-class - test/e2e/dra/dra.go:706 @ 03/14/23 14:00:04.856 (8ms)
< Exit [BeforeEach] driver - test/e2e/dra/dra.go:752 @ 03/14/23 14:00:04.856 (8ms)
> Enter [It] supports claim and class parameters - test/e2e/dra/dra.go:153 @ 03/14/23 14:00:04.856
STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:00:04.856
END STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:00:04.863 (7ms)
STEP: creating *v1.ConfigMap parameters-2 - test/e2e/dra/dra.go:706 @ 03/14/23 14:00:04.863
END STEP: creating *v1.ConfigMap parameters-2 - test/e2e/dra/dra.go:706 @ 03/14/23 14:00:04.872 (8ms)
STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:00:04.872
END STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:00:04.881 (10ms)
STEP: creating *v1alpha2.ResourceClaimTemplate tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:00:04.882
END STEP: creating *v1alpha2.ResourceClaimTemplate tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:00:04.888 (7ms)
I0314 14:00:10.247794 66264 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker" pod="dra-3392/dra-test-driver-wfhjf" requestID=1 request="&NodePrepareResourceRequest{Namespace:dra-3392,ClaimUid:ea23a345-d681-416d-aa82-0e8a33f8220b,ClaimName:tester-1-my-inline-claim,ResourceHandle:{\"EnvVars\":{\"admin_x\":\"y\",\"user_a\":\"b\"},\"NodeName\":\"\"},}"
STEP: creating CDI file /cdi/dra-3392.k8s.io-ea23a345-d681-416d-aa82-0e8a33f8220b.json on node kind-worker: {"cdiVersion":"0.3.0","kind":"dra-3392.k8s.io/test","devices":[{"name":"claim-ea23a345-d681-416d-aa82-0e8a33f8220b","containerEdits":{"env":["user_a=b","admin_x=y"]}}]} - test/e2e/dra/deploy.go:217 @ 03/14/23 14:00:10.247
Mar 14 14:00:10.247: INFO: >>> kubeConfig: /root/.kube/config
Mar 14 14:00:10.249: INFO: ExecWithOptions: Clientset creation
Mar 14 14:00:10.249: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-3392/pods/dra-test-driver-wfhjf/exec?command=sh&command=-c&command=base64+-d+%3E%27%2Fcdi%2Fdra-3392.k8s.io-ea23a345-d681-416d-aa82-0e8a33f8220b.json.tmp%27+%3C%3CEOF%0AeyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTMzOTIuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tZWEyM2EzNDUtZDY4MS00MTZkLWFhODItMGU4YTMzZjgyMjBiIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIiwiYWRtaW5feD15Il19fV19%0AEOF&container=plugin&container=plugin&stderr=true&stdout=true)
I0314 14:00:10.356110 66264 io.go:119] "Command completed" command=< [sh -c base64 -d >'/cdi/dra-3392.k8s.io-ea23a345-d681-416d-aa82-0e8a33f8220b.json.tmp' <<EOF eyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTMzOTIuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tZWEyM2EzNDUtZDY4MS00MTZkLWFhODItMGU4YTMzZjgyMjBiIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIiwiYWRtaW5feD15Il19fV19 EOF] > stdout="" stderr="" err=<nil>
Mar 14 14:00:10.356: INFO: >>> kubeConfig: /root/.kube/config
Mar 14 14:00:10.357: INFO: ExecWithOptions: Clientset creation
Mar 14 14:00:10.357: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-3392/pods/dra-test-driver-wfhjf/exec?command=mv&command=%2Fcdi%2Fdra-3392.k8s.io-ea23a345-d681-416d-aa82-0e8a33f8220b.json.tmp&command=%2Fcdi%2Fdra-3392.k8s.io-ea23a345-d681-416d-aa82-0e8a33f8220b.json&container=plugin&container=plugin&stderr=true&stdout=true)
I0314 14:00:10.464031 66264 io.go:119] "Command completed" command=[mv /cdi/dra-3392.k8s.io-ea23a345-d681-416d-aa82-0e8a33f8220b.json.tmp /cdi/dra-3392.k8s.io-ea23a345-d681-416d-aa82-0e8a33f8220b.json] stdout="" stderr="" err=<nil>
I0314 14:00:10.464105 66264 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker" pod="dra-3392/dra-test-driver-wfhjf" requestID=1 response="&NodePrepareResourceResponse{CdiDevices:[dra-3392.k8s.io/test=claim-ea23a345-d681-416d-aa82-0e8a33f8220b],}"
< Exit [It] supports claim and class parameters - test/e2e/dra/dra.go:153 @ 03/14/23 14:00:12.926 (8.07s)
> Enter [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 14:00:12.926
Mar 14 14:00:12.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
< Exit [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 14:00:12.929 (3ms)
> Enter [DeferCleanup (Each)] driver - test/e2e/dra/dra.go:762 @ 03/14/23 14:00:12.929
STEP: delete pods and claims - test/e2e/dra/dra.go:773 @ 03/14/23 14:00:12.936
STEP: deleting
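The `exec` request above ships the CDI spec to the driver pod as a base64 heredoc, which decodes back to exactly the JSON shown in the "creating CDI file" STEP line. A quick standalone check of that round trip (plain Python, no Kubernetes dependencies; the payload is copied verbatim from the log):

```python
import base64
import json

# Base64 payload from the `base64 -d > ... <<EOF` exec command in the log above.
payload = (
    "eyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTMzOTIuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpb"
    "eyJuYW1lIjoiY2xhaW0tZWEyM2EzNDUtZDY4MS00MTZkLWFhODItMGU4YTMzZjgyMjBiIiwiY29udGFpbmVy"
    "RWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIiwiYWRtaW5feD15Il19fV19"
)

# Decode and confirm it matches the CDI spec logged by the test driver.
spec = json.loads(base64.b64decode(payload))
assert spec["cdiVersion"] == "0.3.0"
assert spec["kind"] == "dra-3392.k8s.io/test"
assert spec["devices"][0]["containerEdits"]["env"] == ["user_a=b", "admin_x=y"]
print(json.dumps(spec))
```

The base64 indirection avoids shell-quoting the embedded JSON; the driver then announces the resulting device to the kubelet as `dra-3392.k8s.io/test=claim-ea23a345-d681-416d-aa82-0e8a33f8220b`.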
*v1.Pod dra-3392/tester-1 - test/e2e/dra/dra.go:780 @ 03/14/23 14:00:12.94
I0314 14:00:14.205828 66264 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker" pod="dra-3392/dra-test-driver-wfhjf" requestID=2 request="&NodeUnprepareResourceRequest{Namespace:dra-3392,ClaimUid:ea23a345-d681-416d-aa82-0e8a33f8220b,ClaimName:tester-1-my-inline-claim,ResourceHandle:{\"EnvVars\":{\"admin_x\":\"y\",\"user_a\":\"b\"},\"NodeName\":\"\"},}"
STEP: deleting CDI file /cdi/dra-3392.k8s.io-ea23a345-d681-416d-aa82-0e8a33f8220b.json on node kind-worker - test/e2e/dra/deploy.go:221 @ 03/14/23 14:00:14.205
Mar 14 14:00:14.205: INFO: >>> kubeConfig: /root/.kube/config
Mar 14 14:00:14.206: INFO: ExecWithOptions: Clientset creation
Mar 14 14:00:14.206: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-3392/pods/dra-test-driver-wfhjf/exec?command=rm&command=-rf&command=%2Fcdi%2Fdra-3392.k8s.io-ea23a345-d681-416d-aa82-0e8a33f8220b.json&container=plugin&container=plugin&stderr=true&stdout=true)
I0314 14:00:14.296371 66264 io.go:119] "Command completed" command=[rm -rf /cdi/dra-3392.k8s.io-ea23a345-d681-416d-aa82-0e8a33f8220b.json] stdout="" stderr="" err=<nil>
I0314 14:00:14.296426 66264 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker" pod="dra-3392/dra-test-driver-wfhjf" requestID=2 response="&NodeUnprepareResourceResponse{}"
E0314 14:00:14.700237 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-3392/tester-1-my-inline-claim"
E0314 14:00:14.710146 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-3392/tester-1-my-inline-claim"
E0314 14:00:14.732592 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-3392/tester-1-my-inline-claim"
E0314 14:00:14.757463 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-3392/tester-1-my-inline-claim"
E0314 14:00:14.803071 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-3392/tester-1-my-inline-claim"
E0314 14:00:14.888443 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-3392/tester-1-my-inline-claim"
STEP: waiting for resources on kind-worker to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 14:00:14.962
STEP: waiting for claims to be deallocated and deleted - test/e2e/dra/dra.go:808 @ 03/14/23 14:00:14.962
E0314 14:00:15.053552 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-3392/tester-1-my-inline-claim"
E0314 14:00:15.381696 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-3392/tester-1-my-inline-claim"
E0314 14:00:16.027920 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-3392/tester-1-my-inline-claim"
E0314 14:00:17.314284 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-3392/tester-1-my-inline-claim"
E0314 14:00:19.879625 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-3392/tester-1-my-inline-claim"
E0314 14:00:25.005647 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-3392/tester-1-my-inline-claim"
E0314 14:00:35.252576 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-3392/tester-1-my-inline-claim"
E0314 14:00:55.737619 66264 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-3392/tester-1-my-inline-claim"
[FAILED] Timed out after 60.000s.
claims in the namespaces
Expected
    <[]v1alpha2.ResourceClaim | len:1, cap:1>:
    - metadata:
        creationTimestamp: "2023-03-14T14:00:04Z"
        deletionGracePeriodSeconds: 0
        deletionTimestamp: "2023-03-14T14:00:14Z"
        finalizers:
        - dra-3392.k8s.io/deletion-protection
        managedFields:
        - apiVersion: resource.k8s.io/v1alpha2
          fieldsType: FieldsV1
          fieldsV1:
            f:metadata:
              f:ownerReferences:
                .: {}
                k:{"uid":"8b3ae50b-591d-4052-ad57-f9b471dce106"}: {}
            f:spec:
              f:allocationMode: {}
              f:parametersRef:
                .: {}
                f:kind: {}
                f:name: {}
              f:resourceClassName: {}
          manager: kube-controller-manager
          operation: Update
          time: "2023-03-14T14:00:04Z"
        - apiVersion: resource.k8s.io/v1alpha2
          fieldsType: FieldsV1
          fieldsV1:
            f:metadata:
              f:finalizers:
                .: {}
                v:"dra-3392.k8s.io/deletion-protection": {}
          manager: e2e.test
          operation: Update
          time: "2023-03-14T14:00:06Z"
        - apiVersion: resource.k8s.io/v1alpha2
          fieldsType: FieldsV1
          fieldsV1:
            f:status:
              f:allocation:
                .: {}
                f:availableOnNodes: {}
                f:context: {}
                f:shareable: {}
              f:driverName: {}
          manager: e2e.test
          operation: Update
          subresource: status
          time: "2023-03-14T14:00:06Z"
        name: tester-1-my-inline-claim
        namespace: dra-3392
        ownerReferences:
        - apiVersion: v1
          blockOwnerDeletion: true
          controller: true
          kind: Pod
          name: tester-1
          uid: 8b3ae50b-591d-4052-ad57-f9b471dce106
        resourceVersion: "2240"
        uid: ea23a345-d681-416d-aa82-0e8a33f8220b
      spec:
        allocationMode: WaitForFirstConsumer
        parametersRef:
          kind: ConfigMap
          name: parameters-2
        resourceClassName: dra-3392-class
      status:
        allocation:
          availableOnNodes:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - kind-worker
          context:
          - data: '{"EnvVars":{"admin_x":"y","user_a":"b"},"NodeName":""}'
          shareable: true
        driverName: dra-3392.k8s.io
to be empty
In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:01:14.964
< Exit [DeferCleanup (Each)] driver - test/e2e/dra/dra.go:762 @ 03/14/23 14:01:14.964 (1m2.035s)
> Enter [DeferCleanup (Each)] driver - test/e2e/dra/deploy.go:103 @ 03/14/23 14:01:14.964
I0314 14:01:14.965713 66264 controller.go:310] "resource controller: Shutting down" driver="dra-3392.k8s.io"
E0314 14:01:14.966555 66264 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker" pod="dra-3392/dra-test-driver-wfhjf"
< Exit [DeferCleanup (Each)] driver - test/e2e/dra/deploy.go:103 @ 03/14/23 14:01:14.967 (3ms)
> Enter [DeferCleanup (Each)] driver - deleting *v1.ReplicaSet: dra-3392/dra-test-driver | create.go:156 @ 03/14/23 14:01:14.967
< Exit [DeferCleanup (Each)] driver - deleting *v1.ReplicaSet: dra-3392/dra-test-driver | create.go:156 @ 03/14/23 14:01:15.002 (35ms)
> Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 14:01:15.002
< Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 14:01:15.002 (0s)
> Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:01:15.002
STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:01:15.002
STEP: Collecting events from namespace "dra-3392". - test/e2e/framework/debug/dump.go:42 @ 03/14/23 14:01:15.002
STEP: Found 17 events. - test/e2e/framework/debug/dump.go:46 @ 03/14/23 14:01:15.01
Mar 14 14:01:15.010: INFO: At 2023-03-14 14:00:00 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-wfhjf
Mar 14 14:01:15.010: INFO: At 2023-03-14 14:00:00 +0000 UTC - event for dra-test-driver-wfhjf: {default-scheduler } Scheduled: Successfully assigned dra-3392/dra-test-driver-wfhjf to kind-worker
Mar 14 14:01:15.010: INFO: At 2023-03-14 14:00:01 +0000 UTC - event for dra-test-driver-wfhjf: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine
Mar 14 14:01:15.010: INFO: At 2023-03-14 14:00:01 +0000 UTC - event for dra-test-driver-wfhjf: {kubelet kind-worker} Created: Created container registrar
Mar 14 14:01:15.010: INFO: At 2023-03-14 14:00:01 +0000 UTC - event for dra-test-driver-wfhjf: {kubelet kind-worker} Started: Started container registrar
Mar 14 14:01:15.010: INFO: At 2023-03-14 14:00:01 +0000 UTC - event for dra-test-driver-wfhjf: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine
Mar 14 14:01:15.010: INFO: At 2023-03-14 14:00:01 +0000 UTC - event for dra-test-driver-wfhjf: {kubelet kind-worker} Created: Created container plugin
Mar 14 14:01:15.010: INFO: At 2023-03-14 14:00:02 +0000 UTC - event for dra-test-driver-wfhjf: {kubelet kind-worker} Started: Started container plugin
Mar 14 14:01:15.010: INFO: At 2023-03-14 14:00:04 +0000 UTC - event for tester-1: {resource_claim } FailedResourceClaimCreation: PodResourceClaim my-inline-claim: resource claim template "tester-1": resourceclaimtemplate.resource.k8s.io "tester-1" not found
Mar 14 14:01:15.010: INFO: At 2023-03-14 14:00:04 +0000 UTC - event for tester-1: {default-scheduler } FailedScheduling: 0/3 nodes are available: waiting for dynamic resource controller to create the resourceclaim "tester-1-my-inline-claim". no new claims to deallocate, preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod..
Mar 14 14:01:15.010: INFO: At 2023-03-14 14:00:06 +0000 UTC - event for tester-1: {default-scheduler } FailedScheduling: running Reserve plugin "DynamicResources": waiting for resource driver to allocate resource
Mar 14 14:01:15.010: INFO: At 2023-03-14 14:00:09 +0000 UTC - event for tester-1: {default-scheduler } Scheduled: Successfully assigned dra-3392/tester-1 to kind-worker
Mar 14 14:01:15.010: INFO: At 2023-03-14 14:00:10 +0000 UTC - event for tester-1: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine
Mar 14 14:01:15.010: INFO: At 2023-03-14 14:00:10 +0000 UTC - event for tester-1: {kubelet kind-worker} Created: Created container with-resource
Mar 14 14:01:15.010: INFO: At 2023-03-14 14:00:10 +0000 UTC - event for tester-1: {kubelet kind-worker} Started: Started container with-resource
Mar 14 14:01:15.010: INFO: At 2023-03-14 14:00:12 +0000 UTC - event for tester-1: {kubelet kind-worker} Killing: Stopping container with-resource
Mar 14 14:01:15.010: INFO: At 2023-03-14 14:00:14 +0000 UTC - event for tester-1-my-inline-claim: {resource driver dra-3392.k8s.io } Failed: remove allocation: ResourceClaim.resource.k8s.io "tester-1-my-inline-claim" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete
Mar 14 14:01:15.025: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 14 14:01:15.025: INFO: dra-test-driver-wfhjf kind-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:00:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:00:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:00:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:00:00 +0000 UTC }]
Mar 14 14:01:15.025: INFO:
Mar 14 14:01:15.055: INFO: Logging node info for node
kind-control-plane Mar 14 14:01:15.059: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7b0c8f1f-7d2e-4b5f-ab52-0e2399b9f764 438 0 2023-03-14 13:57:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-14 13:57:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:57:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} 
status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:58:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e8e6b089f1f44ab8ef4a2bc879ddd73,SystemUUID:ee43f17b-1489-4ea4-bec5-b7916f4f1fb0,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 
registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 14 14:01:15.060: INFO: Logging kubelet events for node kind-control-plane
Mar 14 14:01:15.070: INFO: Logging pods the kubelet thinks is on node kind-control-plane
Mar 14 14:01:15.091: INFO: kube-apiserver-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded)
Mar 14 14:01:15.091: INFO: Container kube-apiserver ready: true, restart count 0
Mar 14 14:01:15.091: INFO: kindnet-nx87k started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded)
Mar 14 14:01:15.091: INFO: Container kindnet-cni ready: true, restart count 0
Mar 14 14:01:15.091: INFO: coredns-ffc665895-vmqts started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded)
Mar 14 14:01:15.091: INFO: Container coredns ready: true, restart count 0
Mar 14 14:01:15.091: INFO: kube-controller-manager-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded)
Mar 14 14:01:15.091: INFO: Container kube-controller-manager ready: true, restart count 0
Mar 14 14:01:15.091: INFO: kube-scheduler-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded)
Mar 14 14:01:15.091: INFO: Container kube-scheduler ready: true, restart count 0
Mar 14 14:01:15.091: INFO: etcd-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded)
Mar 14 14:01:15.091: INFO: Container etcd ready: true, restart count 0
Mar 14 14:01:15.091: INFO: kube-proxy-fm2jh started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded)
Mar 14 14:01:15.091: INFO: Container kube-proxy ready: true, restart count 0
Mar 14 14:01:15.091: INFO: coredns-ffc665895-mnldc started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded)
Mar 14 14:01:15.091: INFO: Container coredns ready: true, restart count 0
Mar 14 14:01:15.091: INFO: local-path-provisioner-687869657c-v9k2k started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded)
Mar 14 14:01:15.091: INFO: Container local-path-provisioner ready: true, restart count 0
Mar 14 14:01:15.171: INFO: Latency metrics for node kind-control-plane
Mar 14 14:01:15.171: INFO: Logging node info for node kind-worker
Mar 14 14:01:15.176: INFO: Node Info: &Node{ObjectMeta:{kind-worker 9cca062e-b3b4-4ef2-9c10-412063b4ece4 1368 0 2023-03-14 13:58:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-14 13:58:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet
Update v1 2023-03-14 13:59:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:26 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5a3b3841831c42fc96e5cb187f537f04,SystemUUID:ed67c939-37e3-47de-ab06-0144304a5aa1,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 
registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 14 14:01:15.176: INFO: Logging kubelet events for node kind-worker
Mar 14 14:01:15.182: INFO: Logging pods the kubelet thinks is on node kind-worker
Mar 14 14:01:15.194: INFO: dra-test-driver-8jmwc started at 2023-03-14 14:00:01 +0000 UTC (0+2 container statuses recorded)
Mar 14 14:01:15.194: INFO: Container plugin ready: true, restart count 0
Mar 14 14:01:15.194: INFO: Container registrar ready: true, restart count 0
Mar 14 14:01:15.194: INFO: kindnet-fzdn9 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded)
Mar 14 14:01:15.194: INFO: Container kindnet-cni ready: true, restart count 0
Mar 14 14:01:15.194: INFO: kube-proxy-l4q98 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded)
Mar 14 14:01:15.194: INFO: Container kube-proxy ready: true, restart count 0
Mar 14 14:01:15.194: INFO: dra-test-driver-6zxqg started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded)
Mar 14 14:01:15.194: INFO: Container plugin ready: true, restart count 0
Mar 14 14:01:15.194: INFO: Container registrar ready: true, restart count 0
Mar 14 14:01:15.194: INFO: dra-test-driver-twq8l started at 2023-03-14 14:00:03 +0000 UTC (0+2 container statuses recorded)
Mar 14 14:01:15.194: INFO: Container plugin ready: true, restart count 0
Mar 14 14:01:15.194: INFO: Container registrar ready: true, restart count 0
Mar 14 14:01:15.194: INFO: dra-test-driver-bvfg8 started at 2023-03-14 14:00:02 +0000 UTC (0+2 container statuses recorded)
Mar 14 14:01:15.194: INFO: Container plugin ready: true, restart count 0
Mar 14 14:01:15.194: INFO: Container registrar ready: true, restart count 0
Mar 14 14:01:15.194: INFO: dra-test-driver-b7dnq started at 2023-03-14 14:01:09 +0000 UTC (0+2 container statuses recorded)
Mar 14 14:01:15.194: INFO: Container plugin ready: true, restart count 0
Mar 14 14:01:15.194: INFO: Container registrar ready: true, restart count 0
Mar 14 14:01:15.194: INFO: dra-test-driver-wfhjf started at 2023-03-14 14:00:00 +0000 UTC (0+2 container statuses recorded)
Mar 14 14:01:15.194: INFO: Container plugin ready: true, restart count 0
Mar 14 14:01:15.194: INFO: Container registrar ready: true, restart count 0
Mar 14 14:01:15.194: INFO: dra-test-driver-t74z8 started at 2023-03-14 14:00:07 +0000 UTC (0+2 container statuses recorded)
Mar 14 14:01:15.194: INFO: Container plugin ready: true, restart count 0
Mar 14 14:01:15.194: INFO: Container registrar ready: true, restart count 0
Mar 14 14:01:15.194: INFO: dra-test-driver-t8wgt started at 2023-03-14 14:00:00 +0000 UTC (0+2 container statuses recorded)
Mar 14 14:01:15.194: INFO: Container plugin ready: true, restart count 0
Mar 14 14:01:15.194: INFO: Container registrar ready: true, restart count 0
Mar 14 14:01:15.277: INFO: Latency metrics for node kind-worker
Mar 14 14:01:15.277: INFO: Logging node info for node kind-worker2
Mar 14 14:01:15.281: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 49a194e2-5e70-437e-aa3c-3a490ff23c54 1358 0 2023-03-14 13:58:10
+0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:58:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:59:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 
8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:48810b9a669b47cea51d5fa0f821cf84,SystemUUID:603f9452-86ad-460a-83be-e3f10d4a362c,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de 
registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:01:15.281: INFO: Logging kubelet events for node kind-worker2 Mar 14 14:01:15.288: INFO: Logging pods the kubelet thinks is on node kind-worker2 Mar 14 14:01:15.303: INFO: dra-test-driver-w277j started at 2023-03-14 14:00:02 +0000 UTC (0+2 
container statuses recorded) Mar 14 14:01:15.303: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:15.303: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:15.303: INFO: kube-proxy-vnlx8 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:15.303: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:01:15.303: INFO: dra-test-driver-c9jh8 started at 2023-03-14 14:00:03 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:15.303: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:15.303: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:15.303: INFO: dra-test-driver-vvn4m started at 2023-03-14 14:01:09 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:15.303: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:15.303: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:15.303: INFO: dra-test-driver-jmtw2 started at 2023-03-14 14:00:00 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:15.303: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:15.303: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:15.303: INFO: dra-test-driver-ss4k7 started at 2023-03-14 14:00:01 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:15.303: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:15.303: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:15.303: INFO: kindnet-5qdz7 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:15.303: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:01:15.303: INFO: dra-test-driver-v6g2p started at 2023-03-14 14:00:07 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:15.303: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:15.303: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:15.366: INFO: Latency metrics for node kind-worker2 
END STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:01:15.366 (364ms) < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:01:15.366 (364ms) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:01:15.366 STEP: Destroying namespace "dra-3392" for this suite. - test/e2e/framework/framework.go:351 @ 03/14/23 14:01:15.366 < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:01:15.377 (11ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:01:15.377 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:01:15.377 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sDRA\s\[Feature\:DynamicResourceAllocation\]\skubelet\smust\snot\srun\sa\spod\sif\sa\sclaim\sis\snot\sreserved\sfor\sit$'
[FAILED] Timed out after 60.001s. claims in the namespaces Expected <[]v1alpha2.ResourceClaim | len:1, cap:1>: - metadata: creationTimestamp: "2023-03-14T14:01:19Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T14:01:41Z" finalizers: - dra-1574.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"dra-1574.k8s.io/deletion-protection": {} f:spec: f:allocationMode: {} f:parametersRef: .: {} f:kind: {} f:name: {} f:resourceClassName: {} manager: e2e.test operation: Update time: "2023-03-14T14:01:19Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: f:allocation: .: {} f:availableOnNodes: {} f:context: {} f:shareable: {} f:driverName: {} manager: e2e.test operation: Update subresource: status time: "2023-03-14T14:01:19Z" name: external-claim namespace: dra-1574 resourceVersion: "3501" uid: 24714728-cfc2-4c4d-b493-126f5666af7e spec: allocationMode: Immediate parametersRef: kind: ConfigMap name: parameters-1 resourceClassName: dra-1574-class status: allocation: availableOnNodes: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kind-worker context: - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}' shareable: true driverName: dra-1574.k8s.io to be empty In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:02:41.795from junit_01.xml
> Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 14:01:15.621 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:211 @ 03/14/23 14:01:15.621 Mar 14 14:01:15.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dra - test/e2e/framework/framework.go:250 @ 03/14/23 14:01:15.622 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 03/14/23 14:01:15.635 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:262 @ 03/14/23 14:01:15.64 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 14:01:15.643 (23ms) > Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 14:01:15.643 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 14:01:15.644 (0s) > Enter [BeforeEach] kubelet - test/e2e/dra/deploy.go:62 @ 03/14/23 14:01:15.644 STEP: selecting nodes - test/e2e/dra/deploy.go:63 @ 03/14/23 14:01:15.644 Mar 14 14:01:15.647: INFO: testing on nodes [kind-worker] < Exit [BeforeEach] kubelet - test/e2e/dra/deploy.go:62 @ 03/14/23 14:01:15.647 (4ms) > Enter [BeforeEach] kubelet - test/e2e/dra/deploy.go:95 @ 03/14/23 14:01:15.647 STEP: deploying driver on nodes [kind-worker] - test/e2e/dra/deploy.go:130 @ 03/14/23 14:01:15.647 I0314 14:01:15.648093 66271 controller.go:295] "resource controller: Starting" driver="dra-1574.k8s.io" Mar 14 14:01:15.648: INFO: creating *v1.ReplicaSet: dra-1574/dra-test-driver I0314 14:01:17.669717 66271 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker" pod="dra-1574/dra-test-driver-xrvr8" I0314 14:01:17.669743 66271 nonblockinggrpcserver.go:107] "kubelet plugin/registrar: 
GRPC server started" node="kind-worker" pod="dra-1574/dra-test-driver-xrvr8" STEP: wait for plugin registration - test/e2e/dra/deploy.go:242 @ 03/14/23 14:01:17.669 I0314 14:01:17.873993 66271 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-1574/dra-test-driver-xrvr8" requestID=1 request="&InfoRequest{}" I0314 14:01:17.874035 66271 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-1574/dra-test-driver-xrvr8" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-1574.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-1574.k8s.io.sock,SupportedVersions:[1.0.0],}" I0314 14:01:17.876595 66271 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-1574/dra-test-driver-xrvr8" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}" I0314 14:01:17.876639 66271 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-1574/dra-test-driver-xrvr8" requestID=2 response="&RegistrationStatusResponse{}" < Exit [BeforeEach] kubelet - test/e2e/dra/deploy.go:95 @ 03/14/23 14:01:19.67 (4.023s) > Enter [BeforeEach] kubelet - test/e2e/dra/dra.go:752 @ 03/14/23 14:01:19.67 STEP: creating *v1alpha2.ResourceClass dra-1574-class - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:19.67 END STEP: creating *v1alpha2.ResourceClass dra-1574-class - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:19.679 (9ms) < Exit [BeforeEach] kubelet - test/e2e/dra/dra.go:752 @ 03/14/23 14:01:19.679 (9ms) > Enter [It] must not run a pod if a claim is not reserved for it - test/e2e/dra/dra.go:98 @ 03/14/23 14:01:19.679 STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:19.679 END STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:19.706 (27ms) STEP: creating *v1alpha2.ResourceClaim external-claim - test/e2e/dra/dra.go:706 @ 
03/14/23 14:01:19.706 END STEP: creating *v1alpha2.ResourceClaim external-claim - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:19.72 (15ms) STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:19.72 END STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 14:01:19.754 (34ms) < Exit [It] must not run a pod if a claim is not reserved for it - test/e2e/dra/dra.go:98 @ 03/14/23 14:01:39.756 (20.077s) > Enter [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 14:01:39.756 Mar 14 14:01:39.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 14:01:39.76 (4ms) > Enter [DeferCleanup (Each)] kubelet - test/e2e/dra/dra.go:762 @ 03/14/23 14:01:39.76 STEP: delete pods and claims - test/e2e/dra/dra.go:773 @ 03/14/23 14:01:39.766 STEP: deleting *v1.Pod dra-1574/tester-1 - test/e2e/dra/dra.go:780 @ 03/14/23 14:01:39.77 STEP: deleting *v1alpha2.ResourceClaim dra-1574/external-claim - test/e2e/dra/dra.go:796 @ 03/14/23 14:01:41.786 STEP: waiting for resources on kind-worker to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 14:01:41.793 STEP: waiting for claims to be deallocated and deleted - test/e2e/dra/dra.go:808 @ 03/14/23 14:01:41.793 E0314 14:01:41.796281 66271 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-1574/external-claim" E0314 14:01:41.806518 66271 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-1574/external-claim" E0314 14:01:41.821288 66271 
controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-1574/external-claim" E0314 14:01:41.846170 66271 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-1574/external-claim" E0314 14:01:41.890702 66271 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-1574/external-claim" E0314 14:01:41.975140 66271 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-1574/external-claim" E0314 14:01:42.139623 66271 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-1574/external-claim" E0314 14:01:42.465229 66271 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-1574/external-claim" E0314 14:01:43.110910 66271 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a 
deallocation as complete" key="claim:dra-1574/external-claim" E0314 14:01:44.395406 66271 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-1574/external-claim" E0314 14:01:46.960019 66271 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-1574/external-claim" E0314 14:01:52.088423 66271 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-1574/external-claim" E0314 14:02:02.334557 66271 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-1574/external-claim" E0314 14:02:22.820095 66271 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-1574/external-claim" [FAILED] Timed out after 60.001s. 
claims in the namespaces Expected <[]v1alpha2.ResourceClaim | len:1, cap:1>: - metadata: creationTimestamp: "2023-03-14T14:01:19Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T14:01:41Z" finalizers: - dra-1574.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"dra-1574.k8s.io/deletion-protection": {} f:spec: f:allocationMode: {} f:parametersRef: .: {} f:kind: {} f:name: {} f:resourceClassName: {} manager: e2e.test operation: Update time: "2023-03-14T14:01:19Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: f:allocation: .: {} f:availableOnNodes: {} f:context: {} f:shareable: {} f:driverName: {} manager: e2e.test operation: Update subresource: status time: "2023-03-14T14:01:19Z" name: external-claim namespace: dra-1574 resourceVersion: "3501" uid: 24714728-cfc2-4c4d-b493-126f5666af7e spec: allocationMode: Immediate parametersRef: kind: ConfigMap name: parameters-1 resourceClassName: dra-1574-class status: allocation: availableOnNodes: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kind-worker context: - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}' shareable: true driverName: dra-1574.k8s.io to be empty In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:02:41.795 < Exit [DeferCleanup (Each)] kubelet - test/e2e/dra/dra.go:762 @ 03/14/23 14:02:41.795 (1m2.035s) > Enter [DeferCleanup (Each)] kubelet - test/e2e/dra/deploy.go:103 @ 03/14/23 14:02:41.795 I0314 14:02:41.795717 66271 controller.go:310] "resource controller: Shutting down" driver="dra-1574.k8s.io" E0314 14:02:41.796738 66271 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker" pod="dra-1574/dra-test-driver-xrvr8" < Exit [DeferCleanup (Each)] kubelet - test/e2e/dra/deploy.go:103 @ 03/14/23 14:02:41.796 (1ms) > Enter [DeferCleanup (Each)] 
kubelet - deleting *v1.ReplicaSet: dra-1574/dra-test-driver | create.go:156 @ 03/14/23 14:02:41.796 < Exit [DeferCleanup (Each)] kubelet - deleting *v1.ReplicaSet: dra-1574/dra-test-driver | create.go:156 @ 03/14/23 14:02:41.805 (8ms) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 14:02:41.805 < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 14:02:41.805 (0s) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:02:41.805 STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:02:41.805 STEP: Collecting events from namespace "dra-1574". - test/e2e/framework/debug/dump.go:42 @ 03/14/23 14:02:41.805 STEP: Found 9 events. - test/e2e/framework/debug/dump.go:46 @ 03/14/23 14:02:41.81 Mar 14 14:02:41.810: INFO: At 2023-03-14 14:01:15 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-xrvr8 Mar 14 14:02:41.810: INFO: At 2023-03-14 14:01:15 +0000 UTC - event for dra-test-driver-xrvr8: {default-scheduler } Scheduled: Successfully assigned dra-1574/dra-test-driver-xrvr8 to kind-worker Mar 14 14:02:41.810: INFO: At 2023-03-14 14:01:16 +0000 UTC - event for dra-test-driver-xrvr8: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:02:41.810: INFO: At 2023-03-14 14:01:16 +0000 UTC - event for dra-test-driver-xrvr8: {kubelet kind-worker} Created: Created container registrar Mar 14 14:02:41.810: INFO: At 2023-03-14 14:01:16 +0000 UTC - event for dra-test-driver-xrvr8: {kubelet kind-worker} Started: Started container registrar Mar 14 14:02:41.810: INFO: At 2023-03-14 14:01:16 +0000 UTC - event for dra-test-driver-xrvr8: {kubelet kind-worker} 
Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:02:41.810: INFO: At 2023-03-14 14:01:16 +0000 UTC - event for dra-test-driver-xrvr8: {kubelet kind-worker} Created: Created container plugin Mar 14 14:02:41.810: INFO: At 2023-03-14 14:01:16 +0000 UTC - event for dra-test-driver-xrvr8: {kubelet kind-worker} Started: Started container plugin Mar 14 14:02:41.810: INFO: At 2023-03-14 14:01:41 +0000 UTC - event for external-claim: {resource driver dra-1574.k8s.io } Failed: remove allocation: ResourceClaim.resource.k8s.io "external-claim" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete Mar 14 14:02:41.813: INFO: POD NODE PHASE GRACE CONDITIONS Mar 14 14:02:41.813: INFO: dra-test-driver-xrvr8 kind-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 14:01:15 +0000 UTC }] Mar 14 14:02:41.813: INFO: Mar 14 14:02:41.834: INFO: Logging node info for node kind-control-plane Mar 14 14:02:41.837: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7b0c8f1f-7d2e-4b5f-ab52-0e2399b9f764 438 0 2023-03-14 13:57:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-14 13:57:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:57:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} 
BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:58:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e8e6b089f1f44ab8ef4a2bc879ddd73,SystemUUID:ee43f17b-1489-4ea4-bec5-b7916f4f1fb0,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 
registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:02:41.838: INFO: Logging kubelet events for node kind-control-plane Mar 14 14:02:41.843: INFO: Logging pods the kubelet thinks is on node kind-control-plane Mar 14 14:02:41.853: INFO: kube-proxy-fm2jh started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:41.853: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:02:41.853: INFO: coredns-ffc665895-mnldc started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:41.853: INFO: Container coredns ready: true, restart count 0 Mar 14 14:02:41.853: INFO: local-path-provisioner-687869657c-v9k2k started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:41.853: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 14 14:02:41.853: INFO: kube-controller-manager-kind-control-plane started at 2023-03-14 
13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:41.853: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 14 14:02:41.853: INFO: kube-scheduler-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:41.853: INFO: Container kube-scheduler ready: true, restart count 0 Mar 14 14:02:41.853: INFO: etcd-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:41.853: INFO: Container etcd ready: true, restart count 0 Mar 14 14:02:41.853: INFO: kube-apiserver-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:41.853: INFO: Container kube-apiserver ready: true, restart count 0 Mar 14 14:02:41.853: INFO: kindnet-nx87k started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:41.853: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:02:41.853: INFO: coredns-ffc665895-vmqts started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:41.853: INFO: Container coredns ready: true, restart count 0 Mar 14 14:02:41.919: INFO: Latency metrics for node kind-control-plane Mar 14 14:02:41.919: INFO: Logging node info for node kind-worker Mar 14 14:02:41.922: INFO: Node Info: &Node{ObjectMeta:{kind-worker 9cca062e-b3b4-4ef2-9c10-412063b4ece4 1368 0 2023-03-14 13:58:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-14 13:58:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2023-03-14 13:59:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5a3b3841831c42fc96e5cb187f537f04,SystemUUID:ed67c939-37e3-47de-ab06-0144304a5aa1,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 
registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:02:41.923: INFO: Logging kubelet events for node kind-worker Mar 14 14:02:41.927: INFO: Logging pods the kubelet thinks is on node kind-worker Mar 14 14:02:41.935: INFO: kindnet-fzdn9 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:41.935: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:02:41.935: INFO: kube-proxy-l4q98 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:41.935: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:02:41.935: INFO: dra-test-driver-zg2wf started at 2023-03-14 14:01:29 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:41.935: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:41.935: INFO: Container 
registrar ready: true, restart count 0 Mar 14 14:02:41.935: INFO: dra-test-driver-xrvr8 started at 2023-03-14 14:01:15 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:41.935: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:41.935: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:41.982: INFO: Latency metrics for node kind-worker Mar 14 14:02:41.982: INFO: Logging node info for node kind-worker2 Mar 14 14:02:41.986: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 49a194e2-5e70-437e-aa3c-3a490ff23c54 1358 0 2023-03-14 13:58:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:58:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:59:11 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:48810b9a669b47cea51d5fa0f821cf84,SystemUUID:603f9452-86ad-460a-83be-e3f10d4a362c,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 
registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:02:41.986: INFO: Logging kubelet events for node kind-worker2 Mar 14 14:02:41.989: INFO: Logging pods the kubelet thinks is on node kind-worker2 Mar 14 14:02:41.997: INFO: kindnet-5qdz7 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:41.997: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:02:41.997: INFO: kube-proxy-vnlx8 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:02:41.997: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:02:41.997: INFO: dra-test-driver-9f5vg started at 2023-03-14 14:01:29 +0000 UTC (0+2 container statuses recorded) Mar 14 14:02:41.997: INFO: Container plugin ready: true, restart count 0 Mar 14 14:02:41.997: INFO: Container registrar ready: true, restart count 0 Mar 14 14:02:42.039: INFO: Latency metrics for node kind-worker2 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 
14:02:42.039 (234ms) < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:02:42.039 (234ms) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:02:42.039 STEP: Destroying namespace "dra-1574" for this suite. - test/e2e/framework/framework.go:351 @ 03/14/23 14:02:42.039 < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:02:42.047 (8ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:02:42.047 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:02:42.047 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sDRA\s\[Feature\:DynamicResourceAllocation\]\skubelet\smust\sretry\sNodePrepareResource$'
[FAILED] Timed out after 60.001s. claims in the namespaces Expected <[]v1alpha2.ResourceClaim | len:1, cap:1>: - metadata: creationTimestamp: "2023-03-14T13:58:51Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T14:00:26Z" finalizers: - dra-2586.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:ownerReferences: .: {} k:{"uid":"fab7711e-e4f5-400d-bddd-a78951715481"}: {} f:spec: f:allocationMode: {} f:parametersRef: .: {} f:kind: {} f:name: {} f:resourceClassName: {} manager: kube-controller-manager operation: Update time: "2023-03-14T13:58:51Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"dra-2586.k8s.io/deletion-protection": {} manager: e2e.test operation: Update time: "2023-03-14T13:58:52Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: f:allocation: .: {} f:availableOnNodes: {} f:context: {} f:shareable: {} f:driverName: {} manager: e2e.test operation: Update subresource: status time: "2023-03-14T13:58:52Z" name: tester-1-my-inline-claim namespace: dra-2586 ownerReferences: - apiVersion: v1 blockOwnerDeletion: true controller: true kind: Pod name: tester-1 uid: fab7711e-e4f5-400d-bddd-a78951715481 resourceVersion: "2398" uid: aaca85d9-51cb-4694-b195-0e17d93e54e5 spec: allocationMode: WaitForFirstConsumer parametersRef: kind: ConfigMap name: parameters-1 resourceClassName: dra-2586-class status: allocation: availableOnNodes: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kind-worker context: - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}' shareable: true driverName: dra-2586.k8s.io to be empty In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:01:27.948from junit_01.xml
> Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 13:58:45.351 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:211 @ 03/14/23 13:58:45.352 Mar 14 13:58:45.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dra - test/e2e/framework/framework.go:250 @ 03/14/23 13:58:45.353 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 03/14/23 13:58:45.425 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:262 @ 03/14/23 13:58:45.435 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 13:58:45.441 (90ms) > Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 13:58:45.441 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 13:58:45.442 (0s) > Enter [BeforeEach] kubelet - test/e2e/dra/deploy.go:62 @ 03/14/23 13:58:45.442 STEP: selecting nodes - test/e2e/dra/deploy.go:63 @ 03/14/23 13:58:45.442 Mar 14 13:58:45.458: INFO: testing on nodes [kind-worker] < Exit [BeforeEach] kubelet - test/e2e/dra/deploy.go:62 @ 03/14/23 13:58:45.458 (16ms) > Enter [BeforeEach] kubelet - test/e2e/dra/deploy.go:95 @ 03/14/23 13:58:45.458 STEP: deploying driver on nodes [kind-worker] - test/e2e/dra/deploy.go:130 @ 03/14/23 13:58:45.458 I0314 13:58:45.458814 66269 controller.go:295] "resource controller: Starting" driver="dra-2586.k8s.io" Mar 14 13:58:45.462: INFO: creating *v1.ReplicaSet: dra-2586/dra-test-driver I0314 13:58:49.544638 66269 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker" pod="dra-2586/dra-test-driver-6zxqg" I0314 13:58:49.544675 66269 nonblockinggrpcserver.go:107] "kubelet plugin/registrar: 
GRPC server started" node="kind-worker" pod="dra-2586/dra-test-driver-6zxqg" STEP: wait for plugin registration - test/e2e/dra/deploy.go:242 @ 03/14/23 13:58:49.544 I0314 13:58:50.010149 66269 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-2586/dra-test-driver-6zxqg" requestID=1 request="&InfoRequest{}" I0314 13:58:50.010239 66269 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-2586/dra-test-driver-6zxqg" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-2586.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-2586.k8s.io.sock,SupportedVersions:[1.0.0],}" I0314 13:58:50.013575 66269 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-2586/dra-test-driver-6zxqg" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}" I0314 13:58:50.013605 66269 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-2586/dra-test-driver-6zxqg" requestID=2 response="&RegistrationStatusResponse{}" < Exit [BeforeEach] kubelet - test/e2e/dra/deploy.go:95 @ 03/14/23 13:58:51.544 (6.087s) > Enter [BeforeEach] kubelet - test/e2e/dra/dra.go:752 @ 03/14/23 13:58:51.544 STEP: creating *v1alpha2.ResourceClass dra-2586-class - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.544 END STEP: creating *v1alpha2.ResourceClass dra-2586-class - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.554 (10ms) < Exit [BeforeEach] kubelet - test/e2e/dra/dra.go:752 @ 03/14/23 13:58:51.554 (10ms) > Enter [It] must retry NodePrepareResource - test/e2e/dra/dra.go:69 @ 03/14/23 13:58:51.554 STEP: waiting for container startup to fail - test/e2e/dra/dra.go:75 @ 03/14/23 13:58:51.554 STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.554 END STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.566 (12ms) STEP: 
creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.566 END STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.58 (14ms) STEP: creating *v1alpha2.ResourceClaimTemplate tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.58 END STEP: creating *v1alpha2.ResourceClaimTemplate tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.603 (23ms) STEP: wait for NodePrepareResource call - test/e2e/dra/dra.go:81 @ 03/14/23 13:58:51.603 I0314 13:58:56.183106 66269 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker" pod="dra-2586/dra-test-driver-6zxqg" requestID=1 request="&NodePrepareResourceRequest{Namespace:dra-2586,ClaimUid:aaca85d9-51cb-4694-b195-0e17d93e54e5,ClaimName:tester-1-my-inline-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" E0314 13:58:56.183147 66269 nonblockinggrpcserver.go:127] "kubelet plugin/dra: handling request failed" err="injected error" node="kind-worker" pod="dra-2586/dra-test-driver-6zxqg" requestID=1 request="&NodePrepareResourceRequest{Namespace:dra-2586,ClaimUid:aaca85d9-51cb-4694-b195-0e17d93e54e5,ClaimName:tester-1-my-inline-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" STEP: allowing container startup to succeed - test/e2e/dra/dra.go:89 @ 03/14/23 13:58:57.605 I0314 14:00:22.700836 66269 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker" pod="dra-2586/dra-test-driver-6zxqg" requestID=2 request="&NodePrepareResourceRequest{Namespace:dra-2586,ClaimUid:aaca85d9-51cb-4694-b195-0e17d93e54e5,ClaimName:tester-1-my-inline-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" STEP: creating CDI file /cdi/dra-2586.k8s.io-aaca85d9-51cb-4694-b195-0e17d93e54e5.json on node kind-worker: {"cdiVersion":"0.3.0","kind":"dra-2586.k8s.io/test","devices":[{"name":"claim-aaca85d9-51cb-4694-b195-0e17d93e54e5","containerEdits":{"env":["user_a=b"]}}]} - 
test/e2e/dra/deploy.go:217 @ 03/14/23 14:00:22.7 Mar 14 14:00:22.700: INFO: >>> kubeConfig: /root/.kube/config Mar 14 14:00:22.701: INFO: ExecWithOptions: Clientset creation Mar 14 14:00:22.701: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-2586/pods/dra-test-driver-6zxqg/exec?command=sh&command=-c&command=base64+-d+%3E%27%2Fcdi%2Fdra-2586.k8s.io-aaca85d9-51cb-4694-b195-0e17d93e54e5.json.tmp%27+%3C%3CEOF%0AeyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTI1ODYuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tYWFjYTg1ZDktNTFjYi00Njk0LWIxOTUtMGUxN2Q5M2U1NGU1IiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19%0AEOF&container=plugin&container=plugin&stderr=true&stdout=true) I0314 14:00:22.797376 66269 io.go:119] "Command completed" command=< [sh -c base64 -d >'/cdi/dra-2586.k8s.io-aaca85d9-51cb-4694-b195-0e17d93e54e5.json.tmp' <<EOF eyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTI1ODYuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tYWFjYTg1ZDktNTFjYi00Njk0LWIxOTUtMGUxN2Q5M2U1NGU1IiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19 EOF] > stdout="" stderr="" err=<nil> Mar 14 14:00:22.797: INFO: >>> kubeConfig: /root/.kube/config Mar 14 14:00:22.798: INFO: ExecWithOptions: Clientset creation Mar 14 14:00:22.798: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-2586/pods/dra-test-driver-6zxqg/exec?command=mv&command=%2Fcdi%2Fdra-2586.k8s.io-aaca85d9-51cb-4694-b195-0e17d93e54e5.json.tmp&command=%2Fcdi%2Fdra-2586.k8s.io-aaca85d9-51cb-4694-b195-0e17d93e54e5.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 14:00:22.896462 66269 io.go:119] "Command completed" command=[mv /cdi/dra-2586.k8s.io-aaca85d9-51cb-4694-b195-0e17d93e54e5.json.tmp /cdi/dra-2586.k8s.io-aaca85d9-51cb-4694-b195-0e17d93e54e5.json] stdout="" stderr="" err=<nil> I0314 14:00:22.896550 66269 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker" 
pod="dra-2586/dra-test-driver-6zxqg" requestID=2 response="&NodePrepareResourceResponse{CdiDevices:[dra-2586.k8s.io/test=claim-aaca85d9-51cb-4694-b195-0e17d93e54e5],}" < Exit [It] must retry NodePrepareResource - test/e2e/dra/dra.go:69 @ 03/14/23 14:00:23.907 (1m32.353s) > Enter [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 14:00:23.907 Mar 14 14:00:23.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 14:00:23.911 (4ms) > Enter [DeferCleanup (Each)] kubelet - test/e2e/dra/dra.go:762 @ 03/14/23 14:00:23.911 STEP: delete pods and claims - test/e2e/dra/dra.go:773 @ 03/14/23 14:00:23.917 STEP: deleting *v1.Pod dra-2586/tester-1 - test/e2e/dra/dra.go:780 @ 03/14/23 14:00:23.921 I0314 14:00:25.942488 66269 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker" pod="dra-2586/dra-test-driver-6zxqg" requestID=3 request="&NodeUnprepareResourceRequest{Namespace:dra-2586,ClaimUid:aaca85d9-51cb-4694-b195-0e17d93e54e5,ClaimName:tester-1-my-inline-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" STEP: deleting CDI file /cdi/dra-2586.k8s.io-aaca85d9-51cb-4694-b195-0e17d93e54e5.json on node kind-worker - test/e2e/dra/deploy.go:221 @ 03/14/23 14:00:25.942 Mar 14 14:00:25.942: INFO: >>> kubeConfig: /root/.kube/config Mar 14 14:00:25.943: INFO: ExecWithOptions: Clientset creation Mar 14 14:00:25.943: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-2586/pods/dra-test-driver-6zxqg/exec?command=rm&command=-rf&command=%2Fcdi%2Fdra-2586.k8s.io-aaca85d9-51cb-4694-b195-0e17d93e54e5.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 14:00:26.035894 66269 io.go:119] "Command completed" command=[rm -rf /cdi/dra-2586.k8s.io-aaca85d9-51cb-4694-b195-0e17d93e54e5.json] stdout="" 
stderr="" err=<nil> I0314 14:00:26.035975 66269 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker" pod="dra-2586/dra-test-driver-6zxqg" requestID=3 response="&NodeUnprepareResourceResponse{}" E0314 14:00:26.772754 66269 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2586/tester-1-my-inline-claim" E0314 14:00:26.782720 66269 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2586/tester-1-my-inline-claim" E0314 14:00:26.799072 66269 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2586/tester-1-my-inline-claim" E0314 14:00:26.824692 66269 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2586/tester-1-my-inline-claim" E0314 14:00:26.870807 66269 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2586/tester-1-my-inline-claim" E0314 14:00:26.956290 66269 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is 
invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2586/tester-1-my-inline-claim" E0314 14:00:27.121954 66269 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2586/tester-1-my-inline-claim" E0314 14:00:27.447168 66269 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2586/tester-1-my-inline-claim" STEP: waiting for resources on kind-worker to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 14:00:27.945 STEP: waiting for claims to be deallocated and deleted - test/e2e/dra/dra.go:808 @ 03/14/23 14:00:27.945 E0314 14:00:28.092538 66269 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2586/tester-1-my-inline-claim" E0314 14:00:29.377385 66269 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2586/tester-1-my-inline-claim" E0314 14:00:31.943051 66269 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2586/tester-1-my-inline-claim" E0314 14:00:37.069052 66269 controller.go:345] "resource 
controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2586/tester-1-my-inline-claim" E0314 14:00:47.314349 66269 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2586/tester-1-my-inline-claim" E0314 14:01:07.800048 66269 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"tester-1-my-inline-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-2586/tester-1-my-inline-claim" [FAILED] Timed out after 60.001s. claims in the namespaces Expected <[]v1alpha2.ResourceClaim | len:1, cap:1>: - metadata: creationTimestamp: "2023-03-14T13:58:51Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T14:00:26Z" finalizers: - dra-2586.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:ownerReferences: .: {} k:{"uid":"fab7711e-e4f5-400d-bddd-a78951715481"}: {} f:spec: f:allocationMode: {} f:parametersRef: .: {} f:kind: {} f:name: {} f:resourceClassName: {} manager: kube-controller-manager operation: Update time: "2023-03-14T13:58:51Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"dra-2586.k8s.io/deletion-protection": {} manager: e2e.test operation: Update time: "2023-03-14T13:58:52Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: f:allocation: .: {} f:availableOnNodes: {} f:context: {} f:shareable: {} f:driverName: {} manager: e2e.test operation: Update subresource: status time: "2023-03-14T13:58:52Z" 
name: tester-1-my-inline-claim namespace: dra-2586 ownerReferences: - apiVersion: v1 blockOwnerDeletion: true controller: true kind: Pod name: tester-1 uid: fab7711e-e4f5-400d-bddd-a78951715481 resourceVersion: "2398" uid: aaca85d9-51cb-4694-b195-0e17d93e54e5 spec: allocationMode: WaitForFirstConsumer parametersRef: kind: ConfigMap name: parameters-1 resourceClassName: dra-2586-class status: allocation: availableOnNodes: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kind-worker context: - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}' shareable: true driverName: dra-2586.k8s.io to be empty In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:01:27.948 < Exit [DeferCleanup (Each)] kubelet - test/e2e/dra/dra.go:762 @ 03/14/23 14:01:27.948 (1m4.037s) > Enter [DeferCleanup (Each)] kubelet - test/e2e/dra/deploy.go:103 @ 03/14/23 14:01:27.948 I0314 14:01:27.949593 66269 controller.go:310] "resource controller: Shutting down" driver="dra-2586.k8s.io" E0314 14:01:27.950481 66269 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker" pod="dra-2586/dra-test-driver-6zxqg" < Exit [DeferCleanup (Each)] kubelet - test/e2e/dra/deploy.go:103 @ 03/14/23 14:01:27.951 (4ms) > Enter [DeferCleanup (Each)] kubelet - deleting *v1.ReplicaSet: dra-2586/dra-test-driver | create.go:156 @ 03/14/23 14:01:27.951 < Exit [DeferCleanup (Each)] kubelet - deleting *v1.ReplicaSet: dra-2586/dra-test-driver | create.go:156 @ 03/14/23 14:01:27.97 (19ms) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 14:01:27.97 < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 14:01:27.971 (0s) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 
14:01:27.971 STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:01:27.971 STEP: Collecting events from namespace "dra-2586". - test/e2e/framework/debug/dump.go:42 @ 03/14/23 14:01:27.971 STEP: Found 18 events. - test/e2e/framework/debug/dump.go:46 @ 03/14/23 14:01:27.976 Mar 14 14:01:27.976: INFO: At 2023-03-14 13:58:45 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-6zxqg Mar 14 14:01:27.976: INFO: At 2023-03-14 13:58:45 +0000 UTC - event for dra-test-driver-6zxqg: {default-scheduler } Scheduled: Successfully assigned dra-2586/dra-test-driver-6zxqg to kind-worker Mar 14 14:01:27.976: INFO: At 2023-03-14 13:58:46 +0000 UTC - event for dra-test-driver-6zxqg: {kubelet kind-worker} Pulling: Pulling image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" Mar 14 14:01:27.976: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-6zxqg: {kubelet kind-worker} Started: Started container registrar Mar 14 14:01:27.976: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-6zxqg: {kubelet kind-worker} Created: Created container registrar Mar 14 14:01:27.976: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-6zxqg: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:01:27.976: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-6zxqg: {kubelet kind-worker} Created: Created container plugin Mar 14 14:01:27.976: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-6zxqg: {kubelet kind-worker} Started: Started container plugin Mar 14 14:01:27.976: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-6zxqg: {kubelet kind-worker} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" in 124.138535ms (923.737746ms including waiting) Mar 14 14:01:27.976: INFO: At 2023-03-14 
13:58:51 +0000 UTC - event for tester-1: {resource_claim } FailedResourceClaimCreation: PodResourceClaim my-inline-claim: resource claim template "tester-1": resourceclaimtemplate.resource.k8s.io "tester-1" not found Mar 14 14:01:27.976: INFO: At 2023-03-14 13:58:51 +0000 UTC - event for tester-1: {default-scheduler } FailedScheduling: 0/3 nodes are available: waiting for dynamic resource controller to create the resourceclaim "tester-1-my-inline-claim". no new claims to deallocate, preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod.. Mar 14 14:01:27.976: INFO: At 2023-03-14 13:58:52 +0000 UTC - event for tester-1: {default-scheduler } FailedScheduling: running Reserve plugin "DynamicResources": waiting for resource driver to allocate resource Mar 14 14:01:27.976: INFO: At 2023-03-14 13:58:55 +0000 UTC - event for tester-1: {default-scheduler } Scheduled: Successfully assigned dra-2586/tester-1 to kind-worker Mar 14 14:01:27.976: INFO: At 2023-03-14 14:00:23 +0000 UTC - event for tester-1: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 14 14:01:27.976: INFO: At 2023-03-14 14:00:23 +0000 UTC - event for tester-1: {kubelet kind-worker} Created: Created container with-resource Mar 14 14:01:27.976: INFO: At 2023-03-14 14:00:23 +0000 UTC - event for tester-1: {kubelet kind-worker} Started: Started container with-resource Mar 14 14:01:27.976: INFO: At 2023-03-14 14:00:24 +0000 UTC - event for tester-1: {kubelet kind-worker} Killing: Stopping container with-resource Mar 14 14:01:27.976: INFO: At 2023-03-14 14:00:26 +0000 UTC - event for tester-1-my-inline-claim: {resource driver dra-2586.k8s.io } Failed: remove allocation: ResourceClaim.resource.k8s.io "tester-1-my-inline-claim" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete Mar 14 14:01:27.985: INFO: POD NODE PHASE GRACE CONDITIONS Mar 14 14:01:27.985: INFO: 
dra-test-driver-6zxqg kind-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:45 +0000 UTC }] Mar 14 14:01:27.985: INFO: Mar 14 14:01:28.018: INFO: Logging node info for node kind-control-plane Mar 14 14:01:28.025: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7b0c8f1f-7d2e-4b5f-ab52-0e2399b9f764 438 0 2023-03-14 13:57:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-14 13:57:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:57:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:58:09 +0000 
UTC,LastTransitionTime:2023-03-14 13:58:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e8e6b089f1f44ab8ef4a2bc879ddd73,SystemUUID:ee43f17b-1489-4ea4-bec5-b7916f4f1fb0,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 
registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:01:28.025: INFO: Logging kubelet events for node kind-control-plane Mar 14 14:01:28.035: INFO: Logging pods the kubelet thinks is on node kind-control-plane Mar 14 14:01:28.045: INFO: kube-proxy-fm2jh started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:28.045: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:01:28.045: INFO: coredns-ffc665895-mnldc started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:28.045: INFO: Container coredns ready: true, restart count 0 Mar 14 14:01:28.045: INFO: local-path-provisioner-687869657c-v9k2k started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:28.045: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 14 14:01:28.045: INFO: kube-controller-manager-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:28.045: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 14 14:01:28.045: INFO: kube-scheduler-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:28.045: INFO: Container kube-scheduler ready: true, restart count 0 Mar 14 14:01:28.045: INFO: etcd-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:28.045: INFO: 
Container etcd ready: true, restart count 0 Mar 14 14:01:28.045: INFO: kube-apiserver-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:28.045: INFO: Container kube-apiserver ready: true, restart count 0 Mar 14 14:01:28.045: INFO: kindnet-nx87k started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:28.045: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:01:28.045: INFO: coredns-ffc665895-vmqts started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:28.045: INFO: Container coredns ready: true, restart count 0 Mar 14 14:01:28.142: INFO: Latency metrics for node kind-control-plane Mar 14 14:01:28.142: INFO: Logging node info for node kind-worker Mar 14 14:01:28.147: INFO: Node Info: &Node{ObjectMeta:{kind-worker 9cca062e-b3b4-4ef2-9c10-412063b4ece4 1368 0 2023-03-14 13:58:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-14 13:58:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet 
Update v1 2023-03-14 13:59:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:26 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5a3b3841831c42fc96e5cb187f537f04,SystemUUID:ed67c939-37e3-47de-ab06-0144304a5aa1,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 
registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:01:28.147: INFO: Logging kubelet events for node kind-worker Mar 14 14:01:28.158: INFO: Logging pods the kubelet thinks is on node kind-worker Mar 14 14:01:28.171: INFO: dra-test-driver-8jmwc started at 2023-03-14 14:00:01 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:28.171: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:28.171: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:28.171: INFO: tester-1 started at 2023-03-14 14:01:19 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:28.171: INFO: Container with-resource ready: false, restart count 0 Mar 14 14:01:28.171: INFO: kube-proxy-l4q98 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:28.171: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:01:28.171: INFO: dra-test-driver-6zxqg started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:28.171: INFO: Container plugin ready: 
true, restart count 0 Mar 14 14:01:28.171: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:28.171: INFO: dra-test-driver-zrgsx started at 2023-03-14 14:01:15 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:28.171: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:28.171: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:28.171: INFO: kindnet-fzdn9 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:28.171: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:01:28.171: INFO: dra-test-driver-b7dnq started at 2023-03-14 14:01:09 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:28.171: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:28.171: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:28.171: INFO: dra-test-driver-xrvr8 started at 2023-03-14 14:01:15 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:28.171: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:28.171: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:28.171: INFO: dra-test-driver-dx7tb started at 2023-03-14 14:01:20 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:28.171: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:28.171: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:28.171: INFO: dra-test-driver-9bgm4 started at 2023-03-14 14:01:23 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:28.171: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:28.171: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:28.171: INFO: dra-test-driver-wsdzm started at 2023-03-14 14:01:20 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:28.171: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:28.171: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:28.256: INFO: Latency metrics for node kind-worker Mar 
14 14:01:28.256: INFO: Logging node info for node kind-worker2 Mar 14 14:01:28.261: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 49a194e2-5e70-437e-aa3c-3a490ff23c54 1358 0 2023-03-14 13:58:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:58:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:59:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 
0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:48810b9a669b47cea51d5fa0f821cf84,SystemUUID:603f9452-86ad-460a-83be-e3f10d4a362c,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 
LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:01:28.261: INFO: Logging kubelet events for node kind-worker2 Mar 14 14:01:28.269: INFO: Logging pods the kubelet thinks is on node kind-worker2 Mar 14 14:01:28.280: INFO: kindnet-5qdz7 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:28.280: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:01:28.280: INFO: dra-test-driver-h6qql started at 2023-03-14 14:01:23 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:28.280: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:28.280: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:28.280: INFO: dra-test-driver-ss4k7 started at 2023-03-14 14:00:01 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:28.280: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:28.280: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:28.280: INFO: kube-proxy-vnlx8 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:28.280: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:01:28.280: INFO: dra-test-driver-nvf7d started at 2023-03-14 14:01:15 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:28.280: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:28.280: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:28.280: INFO: dra-test-driver-lgdqg started at 2023-03-14 14:01:20 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:28.280: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:28.280: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:28.280: INFO: dra-test-driver-vvn4m started at 2023-03-14 14:01:09 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:28.280: INFO: 
Container plugin ready: true, restart count 0 Mar 14 14:01:28.280: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:28.280: INFO: tester-1 started at 2023-03-14 14:01:27 +0000 UTC (1+1 container statuses recorded) Mar 14 14:01:28.280: INFO: Init container with-resource-init ready: false, restart count 0 Mar 14 14:01:28.280: INFO: Container with-resource ready: false, restart count 0 Mar 14 14:01:28.280: INFO: dra-test-driver-w6qnj started at 2023-03-14 14:01:20 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:28.280: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:28.280: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:28.280: INFO: tester-1 started at 2023-03-14 14:01:27 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:28.280: INFO: Container with-resource ready: false, restart count 0 Mar 14 14:01:28.360: INFO: Latency metrics for node kind-worker2 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:01:28.36 (390ms) < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:01:28.36 (390ms) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:01:28.36 STEP: Destroying namespace "dra-2586" for this suite. - test/e2e/framework/framework.go:351 @ 03/14/23 14:01:28.36 < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:01:28.368 (8ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:01:28.369 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:01:28.369 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sDRA\s\[Feature\:DynamicResourceAllocation\]\skubelet\smust\sunprepare\sresources\sfor\sforce\-deleted\spod$'
[FAILED] Timed out after 60.001s. claims in the namespaces Expected <[]v1alpha2.ResourceClaim | len:1, cap:1>: - metadata: creationTimestamp: "2023-03-14T13:58:51Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T13:58:59Z" finalizers: - dra-9296.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"dra-9296.k8s.io/deletion-protection": {} f:spec: f:allocationMode: {} f:parametersRef: .: {} f:kind: {} f:name: {} f:resourceClassName: {} manager: e2e.test operation: Update time: "2023-03-14T13:58:51Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: f:allocation: .: {} f:availableOnNodes: {} f:context: {} f:shareable: {} f:driverName: {} manager: e2e.test operation: Update subresource: status time: "2023-03-14T13:58:51Z" name: external-claim namespace: dra-9296 resourceVersion: "1214" uid: e8c387ca-d379-475c-8a30-8ee7c0c30dc8 spec: allocationMode: Immediate parametersRef: kind: ConfigMap name: parameters-1 resourceClassName: dra-9296-class status: allocation: availableOnNodes: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kind-worker2 context: - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}' shareable: true driverName: dra-9296.k8s.io to be empty In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 13:59:59.722from junit_01.xml
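The repeated "resource controller: processing failed" errors in the log below all hit the same v1alpha2 validation rule: a claim's `status.allocation` may only be cleared while a requested deallocation is being marked complete, so clearing it on a claim whose deallocation was never requested is rejected forever and the cleanup wait times out. A minimal toy model of that ordering constraint (illustrative only; field names are simplified booleans, and this is not the real apiserver validation code):

```go
// Toy model of the v1alpha2 ordering rule behind the repeated
// "Forbidden: can only remove while marking a deallocation as complete"
// errors: the allocation can only be removed in the same update that
// completes a previously requested deallocation.
package main

import (
	"errors"
	"fmt"
)

// claimStatus mirrors just the two ResourceClaim status aspects involved
// in this failure; everything else is omitted.
type claimStatus struct {
	allocated             bool
	deallocationRequested bool
}

// removeAllocation models the validation the API server applied in the log:
// clearing the allocation is forbidden unless deallocation was requested.
func removeAllocation(s *claimStatus) error {
	if !s.deallocationRequested {
		return errors.New("status.allocation: Forbidden: can only remove while marking a deallocation as complete")
	}
	s.allocated = false
	s.deallocationRequested = false
	return nil
}

func main() {
	s := claimStatus{allocated: true} // deallocation was never requested
	fmt.Println(removeAllocation(&s)) // rejected, like the controller retries below
	s.deallocationRequested = true
	fmt.Println(removeAllocation(&s)) // now allowed
}
```

In this model, as in the log, the controller can retry the forbidden update indefinitely without making progress, which is why the finalizer `dra-9296.k8s.io/deletion-protection` is never removed and the claim survives past the 60s timeout.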
> Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 13:58:45.337 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:211 @ 03/14/23 13:58:45.338 Mar 14 13:58:45.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dra - test/e2e/framework/framework.go:250 @ 03/14/23 13:58:45.339 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 03/14/23 13:58:45.416 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:262 @ 03/14/23 13:58:45.423 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 13:58:45.432 (95ms) > Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 13:58:45.432 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 13:58:45.432 (0s) > Enter [BeforeEach] kubelet - test/e2e/dra/deploy.go:62 @ 03/14/23 13:58:45.432 STEP: selecting nodes - test/e2e/dra/deploy.go:63 @ 03/14/23 13:58:45.433 Mar 14 13:58:45.447: INFO: testing on nodes [kind-worker2] < Exit [BeforeEach] kubelet - test/e2e/dra/deploy.go:62 @ 03/14/23 13:58:45.447 (14ms) > Enter [BeforeEach] kubelet - test/e2e/dra/deploy.go:95 @ 03/14/23 13:58:45.447 STEP: deploying driver on nodes [kind-worker2] - test/e2e/dra/deploy.go:130 @ 03/14/23 13:58:45.447 I0314 13:58:45.447734 66271 controller.go:295] "resource controller: Starting" driver="dra-9296.k8s.io" Mar 14 13:58:45.451: INFO: creating *v1.ReplicaSet: dra-9296/dra-test-driver I0314 13:58:49.544637 66271 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker2" pod="dra-9296/dra-test-driver-n5z8m" I0314 13:58:49.544668 66271 nonblockinggrpcserver.go:107] "kubelet 
plugin/registrar: GRPC server started" node="kind-worker2" pod="dra-9296/dra-test-driver-n5z8m" STEP: wait for plugin registration - test/e2e/dra/deploy.go:242 @ 03/14/23 13:58:49.544 I0314 13:58:49.957994 66271 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-9296/dra-test-driver-n5z8m" requestID=1 request="&InfoRequest{}" I0314 13:58:49.958043 66271 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-9296/dra-test-driver-n5z8m" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-9296.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-9296.k8s.io.sock,SupportedVersions:[1.0.0],}" I0314 13:58:49.975975 66271 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-9296/dra-test-driver-n5z8m" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}" I0314 13:58:49.976120 66271 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-9296/dra-test-driver-n5z8m" requestID=2 response="&RegistrationStatusResponse{}" < Exit [BeforeEach] kubelet - test/e2e/dra/deploy.go:95 @ 03/14/23 13:58:51.545 (6.098s) > Enter [BeforeEach] kubelet - test/e2e/dra/dra.go:752 @ 03/14/23 13:58:51.545 STEP: creating *v1alpha2.ResourceClass dra-9296-class - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.545 END STEP: creating *v1alpha2.ResourceClass dra-9296-class - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.554 (9ms) < Exit [BeforeEach] kubelet - test/e2e/dra/dra.go:752 @ 03/14/23 13:58:51.554 (9ms) > Enter [It] must unprepare resources for force-deleted pod - test/e2e/dra/dra.go:121 @ 03/14/23 13:58:51.554 STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.554 END STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.571 (17ms) STEP: creating *v1alpha2.ResourceClaim external-claim - 
test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.571 END STEP: creating *v1alpha2.ResourceClaim external-claim - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.608 (37ms) STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.608 END STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.622 (14ms) I0314 13:58:53.285077 66271 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker2" pod="dra-9296/dra-test-driver-n5z8m" requestID=1 request="&NodePrepareResourceRequest{Namespace:dra-9296,ClaimUid:e8c387ca-d379-475c-8a30-8ee7c0c30dc8,ClaimName:external-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" STEP: creating CDI file /cdi/dra-9296.k8s.io-e8c387ca-d379-475c-8a30-8ee7c0c30dc8.json on node kind-worker2: {"cdiVersion":"0.3.0","kind":"dra-9296.k8s.io/test","devices":[{"name":"claim-e8c387ca-d379-475c-8a30-8ee7c0c30dc8","containerEdits":{"env":["user_a=b"]}}]} - test/e2e/dra/deploy.go:217 @ 03/14/23 13:58:53.285 Mar 14 13:58:53.285: INFO: >>> kubeConfig: /root/.kube/config Mar 14 13:58:53.286: INFO: ExecWithOptions: Clientset creation Mar 14 13:58:53.286: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-9296/pods/dra-test-driver-n5z8m/exec?command=sh&command=-c&command=base64+-d+%3E%27%2Fcdi%2Fdra-9296.k8s.io-e8c387ca-d379-475c-8a30-8ee7c0c30dc8.json.tmp%27+%3C%3CEOF%0AeyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTkyOTYuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tZThjMzg3Y2EtZDM3OS00NzVjLThhMzAtOGVlN2MwYzMwZGM4IiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19%0AEOF&container=plugin&container=plugin&stderr=true&stdout=true) I0314 13:58:53.454724 66271 io.go:119] "Command completed" command=< [sh -c base64 -d >'/cdi/dra-9296.k8s.io-e8c387ca-d379-475c-8a30-8ee7c0c30dc8.json.tmp' <<EOF 
eyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTkyOTYuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tZThjMzg3Y2EtZDM3OS00NzVjLThhMzAtOGVlN2MwYzMwZGM4IiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19 EOF] > stdout="" stderr="" err=<nil> Mar 14 13:58:53.454: INFO: >>> kubeConfig: /root/.kube/config Mar 14 13:58:53.455: INFO: ExecWithOptions: Clientset creation Mar 14 13:58:53.455: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-9296/pods/dra-test-driver-n5z8m/exec?command=mv&command=%2Fcdi%2Fdra-9296.k8s.io-e8c387ca-d379-475c-8a30-8ee7c0c30dc8.json.tmp&command=%2Fcdi%2Fdra-9296.k8s.io-e8c387ca-d379-475c-8a30-8ee7c0c30dc8.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 13:58:53.679323 66271 io.go:119] "Command completed" command=[mv /cdi/dra-9296.k8s.io-e8c387ca-d379-475c-8a30-8ee7c0c30dc8.json.tmp /cdi/dra-9296.k8s.io-e8c387ca-d379-475c-8a30-8ee7c0c30dc8.json] stdout="" stderr="" err=<nil> I0314 13:58:53.679412 66271 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker2" pod="dra-9296/dra-test-driver-n5z8m" requestID=1 response="&NodePrepareResourceResponse{CdiDevices:[dra-9296.k8s.io/test=claim-e8c387ca-d379-475c-8a30-8ee7c0c30dc8],}" STEP: force delete test pod tester-1 - test/e2e/dra/dra.go:132 @ 03/14/23 13:58:55.665 STEP: waiting for resources on kind-worker2 to be unprepared - test/e2e/dra/dra.go:139 @ 03/14/23 13:58:55.679 I0314 13:58:59.094258 66271 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker2" pod="dra-9296/dra-test-driver-n5z8m" requestID=2 request="&NodeUnprepareResourceRequest{Namespace:dra-9296,ClaimUid:e8c387ca-d379-475c-8a30-8ee7c0c30dc8,ClaimName:external-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"\"},}" STEP: deleting CDI file /cdi/dra-9296.k8s.io-e8c387ca-d379-475c-8a30-8ee7c0c30dc8.json on node kind-worker2 - test/e2e/dra/deploy.go:221 @ 03/14/23 13:58:59.094 Mar 14 
13:58:59.094: INFO: >>> kubeConfig: /root/.kube/config Mar 14 13:58:59.095: INFO: ExecWithOptions: Clientset creation Mar 14 13:58:59.095: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-9296/pods/dra-test-driver-n5z8m/exec?command=rm&command=-rf&command=%2Fcdi%2Fdra-9296.k8s.io-e8c387ca-d379-475c-8a30-8ee7c0c30dc8.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 13:58:59.295096 66271 io.go:119] "Command completed" command=[rm -rf /cdi/dra-9296.k8s.io-e8c387ca-d379-475c-8a30-8ee7c0c30dc8.json] stdout="" stderr="" err=<nil> I0314 13:58:59.295150 66271 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker2" pod="dra-9296/dra-test-driver-n5z8m" requestID=2 response="&NodeUnprepareResourceResponse{}" < Exit [It] must unprepare resources for force-deleted pod - test/e2e/dra/dra.go:121 @ 03/14/23 13:58:59.681 (8.126s) > Enter [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 13:58:59.681 Mar 14 13:58:59.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 13:58:59.684 (3ms) > Enter [DeferCleanup (Each)] kubelet - test/e2e/dra/dra.go:762 @ 03/14/23 13:58:59.684 STEP: delete pods and claims - test/e2e/dra/dra.go:773 @ 03/14/23 13:58:59.695 STEP: deleting *v1alpha2.ResourceClaim dra-9296/external-claim - test/e2e/dra/dra.go:796 @ 03/14/23 13:58:59.708 STEP: waiting for resources on kind-worker2 to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 13:58:59.72 STEP: waiting for claims to be deallocated and deleted - test/e2e/dra/dra.go:808 @ 03/14/23 13:58:59.72 E0314 13:58:59.725081 66271 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while 
marking a deallocation as complete" key="claim:dra-9296/external-claim" E0314 13:58:59.733798 66271 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-9296/external-claim" E0314 13:58:59.749254 66271 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-9296/external-claim" E0314 13:58:59.775128 66271 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-9296/external-claim" E0314 13:58:59.828095 66271 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-9296/external-claim" E0314 13:58:59.916381 66271 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-9296/external-claim" E0314 13:59:00.088772 66271 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-9296/external-claim" E0314 13:59:00.416927 66271 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io 
\"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-9296/external-claim" E0314 13:59:01.067651 66271 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-9296/external-claim" E0314 13:59:02.372224 66271 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-9296/external-claim" E0314 13:59:04.968697 66271 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-9296/external-claim" E0314 13:59:10.093620 66271 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-9296/external-claim" E0314 13:59:20.338990 66271 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-9296/external-claim" E0314 13:59:40.824477 66271 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-9296/external-claim" [FAILED] Timed out after 60.001s. 
claims in the namespaces Expected <[]v1alpha2.ResourceClaim | len:1, cap:1>: - metadata: creationTimestamp: "2023-03-14T13:58:51Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T13:58:59Z" finalizers: - dra-9296.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"dra-9296.k8s.io/deletion-protection": {} f:spec: f:allocationMode: {} f:parametersRef: .: {} f:kind: {} f:name: {} f:resourceClassName: {} manager: e2e.test operation: Update time: "2023-03-14T13:58:51Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: f:allocation: .: {} f:availableOnNodes: {} f:context: {} f:shareable: {} f:driverName: {} manager: e2e.test operation: Update subresource: status time: "2023-03-14T13:58:51Z" name: external-claim namespace: dra-9296 resourceVersion: "1214" uid: e8c387ca-d379-475c-8a30-8ee7c0c30dc8 spec: allocationMode: Immediate parametersRef: kind: ConfigMap name: parameters-1 resourceClassName: dra-9296-class status: allocation: availableOnNodes: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kind-worker2 context: - data: '{"EnvVars":{"user_a":"b"},"NodeName":""}' shareable: true driverName: dra-9296.k8s.io to be empty In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 13:59:59.722 < Exit [DeferCleanup (Each)] kubelet - test/e2e/dra/dra.go:762 @ 03/14/23 13:59:59.722 (1m0.038s) > Enter [DeferCleanup (Each)] kubelet - test/e2e/dra/deploy.go:103 @ 03/14/23 13:59:59.722 I0314 13:59:59.722928 66271 controller.go:310] "resource controller: Shutting down" driver="dra-9296.k8s.io" E0314 13:59:59.723760 66271 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker2" pod="dra-9296/dra-test-driver-n5z8m" < Exit [DeferCleanup (Each)] kubelet - test/e2e/dra/deploy.go:103 @ 03/14/23 13:59:59.723 (1ms) > Enter [DeferCleanup (Each)] 
kubelet - deleting *v1.ReplicaSet: dra-9296/dra-test-driver | create.go:156 @ 03/14/23 13:59:59.723 < Exit [DeferCleanup (Each)] kubelet - deleting *v1.ReplicaSet: dra-9296/dra-test-driver | create.go:156 @ 03/14/23 13:59:59.74 (17ms) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 13:59:59.74 < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 13:59:59.74 (0s) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 13:59:59.74 STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 13:59:59.74 STEP: Collecting events from namespace "dra-9296". - test/e2e/framework/debug/dump.go:42 @ 03/14/23 13:59:59.74 STEP: Found 16 events. - test/e2e/framework/debug/dump.go:46 @ 03/14/23 13:59:59.746 Mar 14 13:59:59.746: INFO: At 2023-03-14 13:58:45 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-n5z8m Mar 14 13:59:59.746: INFO: At 2023-03-14 13:58:45 +0000 UTC - event for dra-test-driver-n5z8m: {default-scheduler } Scheduled: Successfully assigned dra-9296/dra-test-driver-n5z8m to kind-worker2 Mar 14 13:59:59.746: INFO: At 2023-03-14 13:58:46 +0000 UTC - event for dra-test-driver-n5z8m: {kubelet kind-worker2} Pulling: Pulling image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" Mar 14 13:59:59.746: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-n5z8m: {kubelet kind-worker2} Started: Started container registrar Mar 14 13:59:59.746: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-n5z8m: {kubelet kind-worker2} Created: Created container registrar Mar 14 13:59:59.746: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-n5z8m: {kubelet kind-worker2} Pulled: Container image 
"registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 13:59:59.746: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-n5z8m: {kubelet kind-worker2} Created: Created container plugin Mar 14 13:59:59.746: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-n5z8m: {kubelet kind-worker2} Started: Started container plugin Mar 14 13:59:59.746: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-n5z8m: {kubelet kind-worker2} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" in 123.922019ms (994.258545ms including waiting) Mar 14 13:59:59.746: INFO: At 2023-03-14 13:58:51 +0000 UTC - event for tester-1: {default-scheduler } FailedScheduling: 0/3 nodes are available: unallocated immediate resourceclaim. no new claims to deallocate, preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod.. Mar 14 13:59:59.746: INFO: At 2023-03-14 13:58:52 +0000 UTC - event for tester-1: {default-scheduler } Scheduled: Successfully assigned dra-9296/tester-1 to kind-worker2 Mar 14 13:59:59.746: INFO: At 2023-03-14 13:58:54 +0000 UTC - event for tester-1: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 14 13:59:59.746: INFO: At 2023-03-14 13:58:54 +0000 UTC - event for tester-1: {kubelet kind-worker2} Created: Created container with-resource Mar 14 13:59:59.746: INFO: At 2023-03-14 13:58:54 +0000 UTC - event for tester-1: {kubelet kind-worker2} Started: Started container with-resource Mar 14 13:59:59.746: INFO: At 2023-03-14 13:58:57 +0000 UTC - event for tester-1: {kubelet kind-worker2} Killing: Stopping container with-resource Mar 14 13:59:59.746: INFO: At 2023-03-14 13:58:59 +0000 UTC - event for external-claim: {resource driver dra-9296.k8s.io } Failed: remove allocation: ResourceClaim.resource.k8s.io "external-claim" is invalid: status.allocation: Forbidden: can only 
remove while marking a deallocation as complete Mar 14 13:59:59.750: INFO: POD NODE PHASE GRACE CONDITIONS Mar 14 13:59:59.750: INFO: dra-test-driver-n5z8m kind-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:45 +0000 UTC }] Mar 14 13:59:59.750: INFO: Mar 14 13:59:59.819: INFO: Logging node info for node kind-control-plane Mar 14 13:59:59.823: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7b0c8f1f-7d2e-4b5f-ab52-0e2399b9f764 438 0 2023-03-14 13:57:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-14 13:57:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:57:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 
UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:58:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e8e6b089f1f44ab8ef4a2bc879ddd73,SystemUUID:ee43f17b-1489-4ea4-bec5-b7916f4f1fb0,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 
registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 13:59:59.824: INFO: Logging kubelet events for node kind-control-plane Mar 14 13:59:59.830: INFO: Logging pods the kubelet thinks is on node kind-control-plane Mar 14 13:59:59.867: INFO: kube-controller-manager-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 13:59:59.867: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 14 13:59:59.867: INFO: kube-scheduler-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 13:59:59.867: INFO: Container kube-scheduler ready: true, restart count 0 Mar 14 13:59:59.867: INFO: etcd-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 13:59:59.867: INFO: Container etcd ready: true, restart count 0 Mar 14 13:59:59.867: INFO: kube-apiserver-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 13:59:59.867: INFO: Container kube-apiserver ready: true, restart count 0 Mar 14 13:59:59.867: INFO: kindnet-nx87k started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 13:59:59.867: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 13:59:59.867: INFO: coredns-ffc665895-vmqts started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 13:59:59.867: INFO: Container coredns 
ready: true, restart count 0 Mar 14 13:59:59.867: INFO: kube-proxy-fm2jh started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 13:59:59.867: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 13:59:59.867: INFO: coredns-ffc665895-mnldc started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 13:59:59.867: INFO: Container coredns ready: true, restart count 0 Mar 14 13:59:59.867: INFO: local-path-provisioner-687869657c-v9k2k started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 13:59:59.867: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 14 14:00:00.061: INFO: Latency metrics for node kind-control-plane Mar 14 14:00:00.061: INFO: Logging node info for node kind-worker Mar 14 14:00:00.070: INFO: Node Info: &Node{ObjectMeta:{kind-worker 9cca062e-b3b4-4ef2-9c10-412063b4ece4 1368 0 2023-03-14 13:58:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-14 13:58:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet 
Update v1 2023-03-14 13:59:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:26 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5a3b3841831c42fc96e5cb187f537f04,SystemUUID:ed67c939-37e3-47de-ab06-0144304a5aa1,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 
registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:00:00.070: INFO: Logging kubelet events for node kind-worker Mar 14 14:00:00.080: INFO: Logging pods the kubelet thinks is on node kind-worker Mar 14 14:00:00.101: INFO: dra-test-driver-psxfr started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.101: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.101: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.101: INFO: dra-test-driver-4t66h started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.101: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.101: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.101: INFO: dra-test-driver-xqsdd started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.101: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.101: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.101: 
INFO: dra-test-driver-86zdr started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.101: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.101: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.101: INFO: dra-test-driver-5j9fw started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.101: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.101: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.101: INFO: tester-1 started at 2023-03-14 13:58:55 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:00.101: INFO: Container with-resource ready: false, restart count 0 Mar 14 14:00:00.101: INFO: dra-test-driver-7wgnx started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.101: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.101: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.101: INFO: kindnet-fzdn9 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:00.101: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:00:00.101: INFO: kube-proxy-l4q98 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:00.101: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:00:00.101: INFO: dra-test-driver-6zxqg started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.101: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.101: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.101: INFO: dra-test-driver-other-xtlpg started at 2023-03-14 13:58:51 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.101: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.101: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.304: INFO: Latency metrics for node kind-worker Mar 
14 14:00:00.304: INFO: Logging node info for node kind-worker2 Mar 14 14:00:00.308: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 49a194e2-5e70-437e-aa3c-3a490ff23c54 1358 0 2023-03-14 13:58:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:58:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:59:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 
0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:48810b9a669b47cea51d5fa0f821cf84,SystemUUID:603f9452-86ad-460a-83be-e3f10d4a362c,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 
LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:00:00.308: INFO: Logging kubelet events for node kind-worker2 Mar 14 14:00:00.317: INFO: Logging pods the kubelet thinks is on node kind-worker2 Mar 14 14:00:00.329: INFO: dra-test-driver-n5z8m started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.329: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.329: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.329: INFO: dra-test-driver-f8m4d started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.329: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.329: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.329: INFO: dra-test-driver-gjrmj started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.329: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.329: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.329: INFO: kindnet-5qdz7 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:00:00.329: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:00:00.329: INFO: dra-test-driver-4x8n7 started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.329: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.329: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.329: INFO: dra-test-driver-lgb7f started at 2023-03-14 13:58:46 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.329: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.329: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.329: INFO: kube-proxy-vnlx8 started at 2023-03-14 13:58:11 +0000 
UTC (0+1 container statuses recorded) Mar 14 14:00:00.329: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:00:00.329: INFO: dra-test-driver-7qxsw started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.329: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.329: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.329: INFO: dra-test-driver-fb6kg started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.329: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.329: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.329: INFO: dra-test-driver-other-mp779 started at 2023-03-14 13:58:51 +0000 UTC (0+2 container statuses recorded) Mar 14 14:00:00.329: INFO: Container plugin ready: true, restart count 0 Mar 14 14:00:00.329: INFO: Container registrar ready: true, restart count 0 Mar 14 14:00:00.478: INFO: Latency metrics for node kind-worker2 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:00:00.478 (738ms) < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:00:00.478 (738ms) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:00:00.478 STEP: Destroying namespace "dra-9296" for this suite. - test/e2e/framework/framework.go:351 @ 03/14/23 14:00:00.478 < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:00:00.488 (10ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:00:00.488 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:00:00.488 (0s)
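The failure above is the deferred cleanup check finding a ResourceClaim that still exists a minute after deletion: the claim carries the test driver's `deletion-protection` finalizer, and Kubernetes will not actually remove an object marked for deletion until its finalizers list is empty. A minimal Python sketch of that rule, with illustrative names rather than the real API:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class StoredObject:
    """Toy stand-in for an API object; not the real Kubernetes types."""
    name: str
    deletion_timestamp: Optional[str] = None  # set once a delete is requested
    finalizers: List[str] = field(default_factory=list)

def eligible_for_removal(obj: StoredObject) -> bool:
    # An object is only garbage-collected when it is marked for deletion
    # AND every finalizer has been removed by its controller.
    return obj.deletion_timestamp is not None and not obj.finalizers

claim = StoredObject(
    name="external-claim",
    deletion_timestamp="2023-03-14T14:01:27Z",
    finalizers=["dra-654.k8s.io/deletion-protection"],
)
print(eligible_for_removal(claim))  # False: finalizer present, claim lingers

claim.finalizers.remove("dra-654.k8s.io/deletion-protection")
print(eligible_for_removal(claim))  # True: now eligible for removal
```

This is why the test times out waiting for the namespace's claims to be empty: until the driver deallocates the claim and drops its finalizer, the delete request alone has no visible effect.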
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sDRA\s\[Feature\:DynamicResourceAllocation\]\smultiple\sdrivers\swork$'
[FAILED] Timed out after 60.001s. claims in the namespaces Expected <[]v1alpha2.ResourceClaim | len:2, cap:2>: - metadata: creationTimestamp: "2023-03-14T13:58:55Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T13:59:08Z" finalizers: - dra-5718.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"dra-5718.k8s.io/deletion-protection": {} f:spec: f:allocationMode: {} f:parametersRef: .: {} f:kind: {} f:name: {} f:resourceClassName: {} manager: e2e.test operation: Update time: "2023-03-14T13:58:56Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: f:allocation: .: {} f:availableOnNodes: {} f:context: {} f:driverName: {} manager: e2e.test operation: Update subresource: status time: "2023-03-14T13:58:56Z" name: external-claim namespace: dra-5718 resourceVersion: "1327" uid: 5a51a9fa-8583-4096-bf6d-7dbcd4515991 spec: allocationMode: WaitForFirstConsumer parametersRef: kind: ConfigMap name: parameters-1 resourceClassName: dra-5718-class status: allocation: availableOnNodes: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kind-worker context: - data: '{"EnvVars":{"user_a":"b"},"NodeName":"kind-worker"}' driverName: dra-5718.k8s.io - metadata: creationTimestamp: "2023-03-14T13:58:55Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T13:59:08Z" finalizers: - dra-5718-other.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"dra-5718-other.k8s.io/deletion-protection": {} f:spec: f:allocationMode: {} f:parametersRef: .: {} f:kind: {} f:name: {} f:resourceClassName: {} manager: e2e.test operation: Update time: "2023-03-14T13:58:56Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: f:allocation: .: {} f:availableOnNodes: {} f:context: {} f:driverName: {} manager: 
e2e.test operation: Update subresource: status time: "2023-03-14T13:58:56Z" name: external-claim-other namespace: dra-5718 resourceVersion: "1328" uid: d704f95c-8acc-4b0d-872c-b1b7a762c28b spec: allocationMode: WaitForFirstConsumer parametersRef: kind: ConfigMap name: parameters-other-1 resourceClassName: dra-5718-other-class status: allocation: availableOnNodes: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kind-worker context: - data: '{"EnvVars":{"user_a":"b"},"NodeName":"kind-worker"}' driverName: dra-5718-other.k8s.io to be empty In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:00:08.094 There were additional failures detected after the initial failure. These are visible in the timelinefrom junit_01.xml
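The detailed trace below shows how the test driver publishes CDI spec files on each node via pod exec: it base64-decodes the JSON into a `.json.tmp` file, then `mv`s it to the final path, so a concurrent reader sees either no file or a complete spec. A self-contained sketch of that decode-then-rename pattern (paths, driver name, and spec content are illustrative):

```python
import base64
import json
import os
import tempfile

# Illustrative CDI spec matching the shape logged by the e2e driver.
spec = {
    "cdiVersion": "0.3.0",
    "kind": "dra-example.k8s.io/test",
    "devices": [{
        "name": "claim-1234",
        "containerEdits": {"env": ["user_a=b"]},
    }],
}

# The driver ships the JSON as base64 so it survives shell quoting in exec.
payload = base64.b64encode(json.dumps(spec).encode())

cdi_dir = tempfile.mkdtemp()
final_path = os.path.join(cdi_dir, "dra-example.k8s.io-claim-1234.json")
tmp_path = final_path + ".tmp"

# Decode into a temp file first, then rename. rename() is atomic on POSIX,
# so the final path never holds a partially written spec.
with open(tmp_path, "wb") as f:
    f.write(base64.b64decode(payload))
os.rename(tmp_path, final_path)

with open(final_path) as f:
    print(json.load(f)["cdiVersion"])  # -> 0.3.0
```

Teardown is the mirror image seen later in the trace: `NodeUnprepareResource` simply removes the final file with `rm -rf`.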
> Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 13:58:45.332 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:211 @ 03/14/23 13:58:45.332 Mar 14 13:58:45.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dra - test/e2e/framework/framework.go:250 @ 03/14/23 13:58:45.333 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 03/14/23 13:58:45.452 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:262 @ 03/14/23 13:58:45.462 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - set up framework | framework.go:191 @ 03/14/23 13:58:45.467 (135ms) > Enter [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 13:58:45.467 < Exit [BeforeEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:33 @ 03/14/23 13:58:45.467 (0s) > Enter [BeforeEach] multiple drivers - test/e2e/dra/deploy.go:62 @ 03/14/23 13:58:45.467 STEP: selecting nodes - test/e2e/dra/deploy.go:63 @ 03/14/23 13:58:45.467 Mar 14 13:58:45.512: INFO: testing on nodes [kind-worker kind-worker2] < Exit [BeforeEach] multiple drivers - test/e2e/dra/deploy.go:62 @ 03/14/23 13:58:45.512 (45ms) > Enter [BeforeEach] multiple drivers - test/e2e/dra/deploy.go:95 @ 03/14/23 13:58:45.512 STEP: deploying driver on nodes [kind-worker kind-worker2] - test/e2e/dra/deploy.go:130 @ 03/14/23 13:58:45.512 Mar 14 13:58:45.515: INFO: creating *v1.ReplicaSet: dra-5718/dra-test-driver I0314 13:58:45.516327 66267 controller.go:295] "resource controller: Starting" driver="dra-5718.k8s.io" I0314 13:58:49.587534 66267 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker" pod="dra-5718/dra-test-driver-4t66h" I0314 13:58:49.587598 66267 
nonblockinggrpcserver.go:107] "kubelet plugin/registrar: GRPC server started" node="kind-worker" pod="dra-5718/dra-test-driver-4t66h" I0314 13:58:49.589005 66267 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker2" pod="dra-5718/dra-test-driver-f8m4d" I0314 13:58:49.589027 66267 nonblockinggrpcserver.go:107] "kubelet plugin/registrar: GRPC server started" node="kind-worker2" pod="dra-5718/dra-test-driver-f8m4d" STEP: wait for plugin registration - test/e2e/dra/deploy.go:242 @ 03/14/23 13:58:49.589 I0314 13:58:49.990344 66267 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-5718/dra-test-driver-f8m4d" requestID=1 request="&InfoRequest{}" I0314 13:58:49.990386 66267 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-5718/dra-test-driver-f8m4d" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-5718.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-5718.k8s.io.sock,SupportedVersions:[1.0.0],}" I0314 13:58:50.025634 66267 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-5718/dra-test-driver-f8m4d" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}" I0314 13:58:50.025666 66267 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-5718/dra-test-driver-f8m4d" requestID=2 response="&RegistrationStatusResponse{}" I0314 13:58:50.116461 66267 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-5718/dra-test-driver-4t66h" requestID=1 request="&InfoRequest{}" I0314 13:58:50.116505 66267 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-5718/dra-test-driver-4t66h" requestID=1 
response="&PluginInfo{Type:DRAPlugin,Name:dra-5718.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-5718.k8s.io.sock,SupportedVersions:[1.0.0],}" I0314 13:58:50.150783 66267 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-5718/dra-test-driver-4t66h" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}" I0314 13:58:50.150853 66267 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-5718/dra-test-driver-4t66h" requestID=2 response="&RegistrationStatusResponse{}" < Exit [BeforeEach] multiple drivers - test/e2e/dra/deploy.go:95 @ 03/14/23 13:58:51.589 (6.078s) > Enter [BeforeEach] multiple drivers - test/e2e/dra/dra.go:752 @ 03/14/23 13:58:51.589 STEP: creating *v1alpha2.ResourceClass dra-5718-class - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.589 END STEP: creating *v1alpha2.ResourceClass dra-5718-class - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:51.617 (27ms) < Exit [BeforeEach] multiple drivers - test/e2e/dra/dra.go:752 @ 03/14/23 13:58:51.617 (27ms) > Enter [BeforeEach] multiple drivers - test/e2e/dra/deploy.go:95 @ 03/14/23 13:58:51.617 STEP: deploying driver on nodes [kind-worker kind-worker2] - test/e2e/dra/deploy.go:130 @ 03/14/23 13:58:51.617 I0314 13:58:51.618069 66267 controller.go:295] "instance-other/resource controller: Starting" driver="dra-5718-other.k8s.io" Mar 14 13:58:51.620: INFO: creating *v1.ReplicaSet: dra-5718/dra-test-driver-other I0314 13:58:53.671735 66267 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker2" pod="dra-5718/dra-test-driver-other-mp779" I0314 13:58:53.671771 66267 nonblockinggrpcserver.go:107] "kubelet plugin/registrar: GRPC server started" node="kind-worker2" pod="dra-5718/dra-test-driver-other-mp779" I0314 13:58:53.673262 66267 nonblockinggrpcserver.go:107] "kubelet plugin/dra: GRPC server started" node="kind-worker" pod="dra-5718/dra-test-driver-other-xtlpg" 
I0314 13:58:53.673290 66267 nonblockinggrpcserver.go:107] "kubelet plugin/registrar: GRPC server started" node="kind-worker" pod="dra-5718/dra-test-driver-other-xtlpg" STEP: wait for plugin registration - test/e2e/dra/deploy.go:242 @ 03/14/23 13:58:53.673 I0314 13:58:53.882814 66267 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-5718/dra-test-driver-other-mp779" requestID=1 request="&InfoRequest{}" I0314 13:58:53.882857 66267 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-5718/dra-test-driver-other-mp779" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-5718-other.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-5718-other.k8s.io.sock,SupportedVersions:[1.0.0],}" I0314 13:58:53.890730 66267 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker2" pod="dra-5718/dra-test-driver-other-mp779" requestID=2 request="&RegistrationStatus{PluginRegistered:true,Error:,}" I0314 13:58:53.890801 66267 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker2" pod="dra-5718/dra-test-driver-other-mp779" requestID=2 response="&RegistrationStatusResponse{}" I0314 13:58:54.001279 66267 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-5718/dra-test-driver-other-xtlpg" requestID=1 request="&InfoRequest{}" I0314 13:58:54.001324 66267 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-5718/dra-test-driver-other-xtlpg" requestID=1 response="&PluginInfo{Type:DRAPlugin,Name:dra-5718-other.k8s.io,Endpoint:/var/lib/kubelet/plugins/dra-5718-other.k8s.io.sock,SupportedVersions:[1.0.0],}" I0314 13:58:54.006315 66267 nonblockinggrpcserver.go:118] "kubelet plugin/registrar: handling request" node="kind-worker" pod="dra-5718/dra-test-driver-other-xtlpg" requestID=2 
request="&RegistrationStatus{PluginRegistered:true,Error:,}" I0314 13:58:54.006348 66267 nonblockinggrpcserver.go:129] "kubelet plugin/registrar: handling request succeeded" node="kind-worker" pod="dra-5718/dra-test-driver-other-xtlpg" requestID=2 response="&RegistrationStatusResponse{}" < Exit [BeforeEach] multiple drivers - test/e2e/dra/deploy.go:95 @ 03/14/23 13:58:55.674 (4.057s) > Enter [BeforeEach] multiple drivers - test/e2e/dra/dra.go:752 @ 03/14/23 13:58:55.674 STEP: creating *v1alpha2.ResourceClass dra-5718-other-class - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:55.674 END STEP: creating *v1alpha2.ResourceClass dra-5718-other-class - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:55.678 (5ms) < Exit [BeforeEach] multiple drivers - test/e2e/dra/dra.go:752 @ 03/14/23 13:58:55.678 (5ms) > Enter [It] work - test/e2e/dra/dra.go:476 @ 03/14/23 13:58:55.678 STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:55.678 END STEP: creating *v1.ConfigMap parameters-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:55.684 (6ms) STEP: creating *v1.ConfigMap parameters-other-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:55.684 END STEP: creating *v1.ConfigMap parameters-other-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:55.688 (4ms) STEP: creating *v1alpha2.ResourceClaim external-claim - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:55.688 END STEP: creating *v1alpha2.ResourceClaim external-claim - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:55.694 (5ms) STEP: creating *v1alpha2.ResourceClaim external-claim-other - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:55.694 END STEP: creating *v1alpha2.ResourceClaim external-claim-other - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:55.7 (6ms) STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:55.7 END STEP: creating *v1.Pod tester-1 - test/e2e/dra/dra.go:706 @ 03/14/23 13:58:55.704 (4ms) E0314 13:58:55.736779 66267 controller.go:345] "resource controller: processing failed" err="update unsuitable node 
status: Operation cannot be fulfilled on podschedulings.resource.k8s.io \"tester-1\": the object has been modified; please apply your changes to the latest version and try again" key="podscheduling:dra-5718/tester-1" I0314 13:59:00.209719 66267 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker" pod="dra-5718/dra-test-driver-4t66h" requestID=1 request="&NodePrepareResourceRequest{Namespace:dra-5718,ClaimUid:5a51a9fa-8583-4096-bf6d-7dbcd4515991,ClaimName:external-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"kind-worker\"},}" STEP: creating CDI file /cdi/dra-5718.k8s.io-5a51a9fa-8583-4096-bf6d-7dbcd4515991.json on node kind-worker: {"cdiVersion":"0.3.0","kind":"dra-5718.k8s.io/test","devices":[{"name":"claim-5a51a9fa-8583-4096-bf6d-7dbcd4515991","containerEdits":{"env":["user_a=b"]}}]} - test/e2e/dra/deploy.go:217 @ 03/14/23 13:59:00.21 Mar 14 13:59:00.210: INFO: >>> kubeConfig: /root/.kube/config Mar 14 13:59:00.212: INFO: ExecWithOptions: Clientset creation Mar 14 13:59:00.212: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-5718/pods/dra-test-driver-4t66h/exec?command=sh&command=-c&command=base64+-d+%3E%27%2Fcdi%2Fdra-5718.k8s.io-5a51a9fa-8583-4096-bf6d-7dbcd4515991.json.tmp%27+%3C%3CEOF%0AeyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTU3MTguazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tNWE1MWE5ZmEtODU4My00MDk2LWJmNmQtN2RiY2Q0NTE1OTkxIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19%0AEOF&container=plugin&container=plugin&stderr=true&stdout=true) I0314 13:59:00.350146 66267 io.go:119] "Command completed" command=< [sh -c base64 -d >'/cdi/dra-5718.k8s.io-5a51a9fa-8583-4096-bf6d-7dbcd4515991.json.tmp' <<EOF eyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTU3MTguazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tNWE1MWE5ZmEtODU4My00MDk2LWJmNmQtN2RiY2Q0NTE1OTkxIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19 EOF] > stdout="" stderr="" err=<nil> Mar 14 
13:59:00.350: INFO: >>> kubeConfig: /root/.kube/config Mar 14 13:59:00.351: INFO: ExecWithOptions: Clientset creation Mar 14 13:59:00.351: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-5718/pods/dra-test-driver-4t66h/exec?command=mv&command=%2Fcdi%2Fdra-5718.k8s.io-5a51a9fa-8583-4096-bf6d-7dbcd4515991.json.tmp&command=%2Fcdi%2Fdra-5718.k8s.io-5a51a9fa-8583-4096-bf6d-7dbcd4515991.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 13:59:00.532387 66267 io.go:119] "Command completed" command=[mv /cdi/dra-5718.k8s.io-5a51a9fa-8583-4096-bf6d-7dbcd4515991.json.tmp /cdi/dra-5718.k8s.io-5a51a9fa-8583-4096-bf6d-7dbcd4515991.json] stdout="" stderr="" err=<nil> I0314 13:59:00.532515 66267 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker" pod="dra-5718/dra-test-driver-4t66h" requestID=1 response="&NodePrepareResourceResponse{CdiDevices:[dra-5718.k8s.io/test=claim-5a51a9fa-8583-4096-bf6d-7dbcd4515991],}" I0314 13:59:00.545691 66267 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker" pod="dra-5718/dra-test-driver-other-xtlpg" requestID=1 request="&NodePrepareResourceRequest{Namespace:dra-5718,ClaimUid:d704f95c-8acc-4b0d-872c-b1b7a762c28b,ClaimName:external-claim-other,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"kind-worker\"},}" STEP: creating CDI file /cdi/dra-5718-other.k8s.io-d704f95c-8acc-4b0d-872c-b1b7a762c28b.json on node kind-worker: {"cdiVersion":"0.3.0","kind":"dra-5718-other.k8s.io/test","devices":[{"name":"claim-d704f95c-8acc-4b0d-872c-b1b7a762c28b","containerEdits":{"env":["user_a=b"]}}]} - test/e2e/dra/deploy.go:217 @ 03/14/23 13:59:00.545 Mar 14 13:59:00.545: INFO: >>> kubeConfig: /root/.kube/config Mar 14 13:59:00.547: INFO: ExecWithOptions: Clientset creation Mar 14 13:59:00.547: INFO: ExecWithOptions: execute(POST 
https://127.0.0.1:34309/api/v1/namespaces/dra-5718/pods/dra-test-driver-other-xtlpg/exec?command=sh&command=-c&command=base64+-d+%3E%27%2Fcdi%2Fdra-5718-other.k8s.io-d704f95c-8acc-4b0d-872c-b1b7a762c28b.json.tmp%27+%3C%3CEOF%0AeyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTU3MTgtb3RoZXIuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tZDcwNGY5NWMtOGFjYy00YjBkLTg3MmMtYjFiN2E3NjJjMjhiIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19%0AEOF&container=plugin&container=plugin&stderr=true&stdout=true) I0314 13:59:00.710783 66267 io.go:119] "Command completed" command=< [sh -c base64 -d >'/cdi/dra-5718-other.k8s.io-d704f95c-8acc-4b0d-872c-b1b7a762c28b.json.tmp' <<EOF eyJjZGlWZXJzaW9uIjoiMC4zLjAiLCJraW5kIjoiZHJhLTU3MTgtb3RoZXIuazhzLmlvL3Rlc3QiLCJkZXZpY2VzIjpbeyJuYW1lIjoiY2xhaW0tZDcwNGY5NWMtOGFjYy00YjBkLTg3MmMtYjFiN2E3NjJjMjhiIiwiY29udGFpbmVyRWRpdHMiOnsiZW52IjpbInVzZXJfYT1iIl19fV19 EOF] > stdout="" stderr="" err=<nil> Mar 14 13:59:00.710: INFO: >>> kubeConfig: /root/.kube/config Mar 14 13:59:00.711: INFO: ExecWithOptions: Clientset creation Mar 14 13:59:00.711: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-5718/pods/dra-test-driver-other-xtlpg/exec?command=mv&command=%2Fcdi%2Fdra-5718-other.k8s.io-d704f95c-8acc-4b0d-872c-b1b7a762c28b.json.tmp&command=%2Fcdi%2Fdra-5718-other.k8s.io-d704f95c-8acc-4b0d-872c-b1b7a762c28b.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 13:59:00.845231 66267 io.go:119] "Command completed" command=[mv /cdi/dra-5718-other.k8s.io-d704f95c-8acc-4b0d-872c-b1b7a762c28b.json.tmp /cdi/dra-5718-other.k8s.io-d704f95c-8acc-4b0d-872c-b1b7a762c28b.json] stdout="" stderr="" err=<nil> I0314 13:59:00.845301 66267 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker" pod="dra-5718/dra-test-driver-other-xtlpg" requestID=1 response="&NodePrepareResourceResponse{CdiDevices:[dra-5718-other.k8s.io/test=claim-d704f95c-8acc-4b0d-872c-b1b7a762c28b],}" < Exit [It] 
work - test/e2e/dra/dra.go:476 @ 03/14/23 13:59:03.835 (8.157s) > Enter [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 13:59:03.835 Mar 14 13:59:03.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/node/init/init.go:33 @ 03/14/23 13:59:03.868 (33ms) > Enter [DeferCleanup (Each)] multiple drivers - test/e2e/dra/dra.go:762 @ 03/14/23 13:59:03.868 STEP: delete pods and claims - test/e2e/dra/dra.go:773 @ 03/14/23 13:59:03.907 STEP: deleting *v1.Pod dra-5718/tester-1 - test/e2e/dra/dra.go:780 @ 03/14/23 13:59:03.945 I0314 13:59:05.754680 66267 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker" pod="dra-5718/dra-test-driver-4t66h" requestID=2 request="&NodeUnprepareResourceRequest{Namespace:dra-5718,ClaimUid:5a51a9fa-8583-4096-bf6d-7dbcd4515991,ClaimName:external-claim,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"kind-worker\"},}" STEP: deleting CDI file /cdi/dra-5718.k8s.io-5a51a9fa-8583-4096-bf6d-7dbcd4515991.json on node kind-worker - test/e2e/dra/deploy.go:221 @ 03/14/23 13:59:05.754 Mar 14 13:59:05.754: INFO: >>> kubeConfig: /root/.kube/config Mar 14 13:59:05.756: INFO: ExecWithOptions: Clientset creation Mar 14 13:59:05.756: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-5718/pods/dra-test-driver-4t66h/exec?command=rm&command=-rf&command=%2Fcdi%2Fdra-5718.k8s.io-5a51a9fa-8583-4096-bf6d-7dbcd4515991.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 13:59:05.972433 66267 io.go:119] "Command completed" command=[rm -rf /cdi/dra-5718.k8s.io-5a51a9fa-8583-4096-bf6d-7dbcd4515991.json] stdout="" stderr="" err=<nil> I0314 13:59:05.972515 66267 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker" pod="dra-5718/dra-test-driver-4t66h" requestID=2 
response="&NodeUnprepareResourceResponse{}" I0314 13:59:05.986563 66267 nonblockinggrpcserver.go:118] "kubelet plugin/dra: handling request" node="kind-worker" pod="dra-5718/dra-test-driver-other-xtlpg" requestID=2 request="&NodeUnprepareResourceRequest{Namespace:dra-5718,ClaimUid:d704f95c-8acc-4b0d-872c-b1b7a762c28b,ClaimName:external-claim-other,ResourceHandle:{\"EnvVars\":{\"user_a\":\"b\"},\"NodeName\":\"kind-worker\"},}" STEP: deleting CDI file /cdi/dra-5718-other.k8s.io-d704f95c-8acc-4b0d-872c-b1b7a762c28b.json on node kind-worker - test/e2e/dra/deploy.go:221 @ 03/14/23 13:59:05.986 Mar 14 13:59:05.986: INFO: >>> kubeConfig: /root/.kube/config Mar 14 13:59:05.987: INFO: ExecWithOptions: Clientset creation Mar 14 13:59:05.987: INFO: ExecWithOptions: execute(POST https://127.0.0.1:34309/api/v1/namespaces/dra-5718/pods/dra-test-driver-other-xtlpg/exec?command=rm&command=-rf&command=%2Fcdi%2Fdra-5718-other.k8s.io-d704f95c-8acc-4b0d-872c-b1b7a762c28b.json&container=plugin&container=plugin&stderr=true&stdout=true) I0314 13:59:06.167612 66267 io.go:119] "Command completed" command=[rm -rf /cdi/dra-5718-other.k8s.io-d704f95c-8acc-4b0d-872c-b1b7a762c28b.json] stdout="" stderr="" err=<nil> I0314 13:59:06.167724 66267 nonblockinggrpcserver.go:129] "kubelet plugin/dra: handling request succeeded" node="kind-worker" pod="dra-5718/dra-test-driver-other-xtlpg" requestID=2 response="&NodeUnprepareResourceResponse{}" STEP: deleting *v1alpha2.ResourceClaim dra-5718/external-claim - test/e2e/dra/dra.go:796 @ 03/14/23 13:59:08.068 STEP: deleting *v1alpha2.ResourceClaim dra-5718/external-claim-other - test/e2e/dra/dra.go:796 @ 03/14/23 13:59:08.079 E0314 13:59:08.087081 66267 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-5718/external-claim" STEP: waiting for resources on 
kind-worker2 to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 13:59:08.09
STEP: waiting for resources on kind-worker to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 13:59:08.09
STEP: waiting for claims to be deallocated and deleted - test/e2e/dra/dra.go:808 @ 03/14/23 13:59:08.09
E0314 13:59:08.100320 66267 controller.go:345] "instance-other/resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim-other\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-5718/external-claim-other"
E0314 13:59:49.156784 66267 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-5718/external-claim"
E0314 13:59:49.188518 66267 controller.go:345] "instance-other/resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim-other\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-5718/external-claim-other"
[FAILED] Timed out after 60.001s. claims in the namespaces Expected <[]v1alpha2.ResourceClaim | len:2, cap:2>: - metadata: creationTimestamp: "2023-03-14T13:58:55Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T13:59:08Z" finalizers: - dra-5718.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"dra-5718.k8s.io/deletion-protection": {} f:spec: f:allocationMode: {} f:parametersRef: .: {} f:kind: {} f:name: {} f:resourceClassName: {} manager: e2e.test operation: Update time: "2023-03-14T13:58:56Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: f:allocation: .: {} f:availableOnNodes: {} f:context: {} f:driverName: {} manager: e2e.test operation: Update subresource: status time: "2023-03-14T13:58:56Z" name: external-claim namespace: dra-5718 resourceVersion: "1327" uid: 5a51a9fa-8583-4096-bf6d-7dbcd4515991 spec: allocationMode: WaitForFirstConsumer parametersRef: kind: ConfigMap name: parameters-1 resourceClassName: dra-5718-class status: allocation: availableOnNodes: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kind-worker context: - data:
'{"EnvVars":{"user_a":"b"},"NodeName":"kind-worker"}' driverName: dra-5718.k8s.io - metadata: creationTimestamp: "2023-03-14T13:58:55Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T13:59:08Z" finalizers: - dra-5718-other.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"dra-5718-other.k8s.io/deletion-protection": {} f:spec: f:allocationMode: {} f:parametersRef: .: {} f:kind: {} f:name: {} f:resourceClassName: {} manager: e2e.test operation: Update time: "2023-03-14T13:58:56Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: f:allocation: .: {} f:availableOnNodes: {} f:context: {} f:driverName: {} manager: e2e.test operation: Update subresource: status time: "2023-03-14T13:58:56Z" name: external-claim-other namespace: dra-5718 resourceVersion: "1328" uid: d704f95c-8acc-4b0d-872c-b1b7a762c28b spec: allocationMode: WaitForFirstConsumer parametersRef: kind: ConfigMap name: parameters-other-1 resourceClassName: dra-5718-other-class status: allocation: availableOnNodes: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kind-worker context: - data: '{"EnvVars":{"user_a":"b"},"NodeName":"kind-worker"}' driverName: dra-5718-other.k8s.io to be empty In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:00:08.094 < Exit [DeferCleanup (Each)] multiple drivers - test/e2e/dra/dra.go:762 @ 03/14/23 14:00:08.094 (1m4.225s) > Enter [DeferCleanup (Each)] multiple drivers - test/e2e/dra/deploy.go:103 @ 03/14/23 14:00:08.094 I0314 14:00:08.095383 66267 controller.go:310] "instance-other/resource controller: Shutting down" driver="dra-5718-other.k8s.io" E0314 14:00:08.096862 66267 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker" pod="dra-5718/dra-test-driver-other-xtlpg" E0314 14:00:08.097970 66267 
nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker2" pod="dra-5718/dra-test-driver-other-mp779" E0314 14:00:08.099154 66267 nonblockinggrpcserver.go:101] "kubelet plugin/registrar: GRPC server failed" err="listening was stopped" node="kind-worker" pod="dra-5718/dra-test-driver-other-xtlpg" < Exit [DeferCleanup (Each)] multiple drivers - test/e2e/dra/deploy.go:103 @ 03/14/23 14:00:08.099 (5ms) > Enter [DeferCleanup (Each)] multiple drivers - deleting *v1.ReplicaSet: dra-5718/dra-test-driver-other | create.go:156 @ 03/14/23 14:00:08.099 < Exit [DeferCleanup (Each)] multiple drivers - deleting *v1.ReplicaSet: dra-5718/dra-test-driver-other | create.go:156 @ 03/14/23 14:00:08.112 (13ms) > Enter [DeferCleanup (Each)] multiple drivers - test/e2e/dra/dra.go:762 @ 03/14/23 14:00:08.112 STEP: delete pods and claims - test/e2e/dra/dra.go:773 @ 03/14/23 14:00:08.124 STEP: waiting for resources on kind-worker2 to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 14:00:08.156 STEP: waiting for resources on kind-worker to be unprepared - test/e2e/dra/dra.go:804 @ 03/14/23 14:00:08.156 STEP: waiting for claims to be deallocated and deleted - test/e2e/dra/dra.go:808 @ 03/14/23 14:00:08.156 E0314 14:00:30.121873 66267 controller.go:345] "resource controller: processing failed" err="remove allocation: ResourceClaim.resource.k8s.io \"external-claim\" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete" key="claim:dra-5718/external-claim" [FAILED] Timed out after 60.001s. 
claims in the namespaces Expected <[]v1alpha2.ResourceClaim | len:2, cap:2>: - metadata: creationTimestamp: "2023-03-14T13:58:55Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T13:59:08Z" finalizers: - dra-5718.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"dra-5718.k8s.io/deletion-protection": {} f:spec: f:allocationMode: {} f:parametersRef: .: {} f:kind: {} f:name: {} f:resourceClassName: {} manager: e2e.test operation: Update time: "2023-03-14T13:58:56Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: f:allocation: .: {} f:availableOnNodes: {} f:context: {} f:driverName: {} manager: e2e.test operation: Update subresource: status time: "2023-03-14T13:58:56Z" name: external-claim namespace: dra-5718 resourceVersion: "1327" uid: 5a51a9fa-8583-4096-bf6d-7dbcd4515991 spec: allocationMode: WaitForFirstConsumer parametersRef: kind: ConfigMap name: parameters-1 resourceClassName: dra-5718-class status: allocation: availableOnNodes: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kind-worker context: - data: '{"EnvVars":{"user_a":"b"},"NodeName":"kind-worker"}' driverName: dra-5718.k8s.io - metadata: creationTimestamp: "2023-03-14T13:58:55Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T13:59:08Z" finalizers: - dra-5718-other.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"dra-5718-other.k8s.io/deletion-protection": {} f:spec: f:allocationMode: {} f:parametersRef: .: {} f:kind: {} f:name: {} f:resourceClassName: {} manager: e2e.test operation: Update time: "2023-03-14T13:58:56Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: f:allocation: .: {} f:availableOnNodes: {} f:context: {} f:driverName: {} manager: e2e.test operation: Update 
subresource: status time: "2023-03-14T13:58:56Z" name: external-claim-other namespace: dra-5718 resourceVersion: "1328" uid: d704f95c-8acc-4b0d-872c-b1b7a762c28b spec: allocationMode: WaitForFirstConsumer parametersRef: kind: ConfigMap name: parameters-other-1 resourceClassName: dra-5718-other-class status: allocation: availableOnNodes: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kind-worker context: - data: '{"EnvVars":{"user_a":"b"},"NodeName":"kind-worker"}' driverName: dra-5718-other.k8s.io to be empty In [DeferCleanup (Each)] at: test/e2e/dra/dra.go:815 @ 03/14/23 14:01:08.159 < Exit [DeferCleanup (Each)] multiple drivers - test/e2e/dra/dra.go:762 @ 03/14/23 14:01:08.159 (1m0.047s) > Enter [DeferCleanup (Each)] multiple drivers - test/e2e/dra/deploy.go:103 @ 03/14/23 14:01:08.159 I0314 14:01:08.159596 66267 controller.go:310] "resource controller: Shutting down" driver="dra-5718.k8s.io" E0314 14:01:08.161046 66267 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker" pod="dra-5718/dra-test-driver-4t66h" E0314 14:01:08.161206 66267 nonblockinggrpcserver.go:101] "kubelet plugin/dra: GRPC server failed" err="listening was stopped" node="kind-worker2" pod="dra-5718/dra-test-driver-f8m4d" E0314 14:01:08.161615 66267 nonblockinggrpcserver.go:101] "kubelet plugin/registrar: GRPC server failed" err="listening was stopped" node="kind-worker2" pod="dra-5718/dra-test-driver-f8m4d" < Exit [DeferCleanup (Each)] multiple drivers - test/e2e/dra/deploy.go:103 @ 03/14/23 14:01:08.161 (2ms) > Enter [DeferCleanup (Each)] multiple drivers - deleting *v1.ReplicaSet: dra-5718/dra-test-driver | create.go:156 @ 03/14/23 14:01:08.161 < Exit [DeferCleanup (Each)] multiple drivers - deleting *v1.ReplicaSet: dra-5718/dra-test-driver | create.go:156 @ 03/14/23 14:01:08.173 (12ms) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - 
test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 14:01:08.173 < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - test/e2e/framework/metrics/init/init.go:35 @ 03/14/23 14:01:08.173 (0s) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:01:08.173 STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:01:08.173 STEP: Collecting events from namespace "dra-5718". - test/e2e/framework/debug/dump.go:42 @ 03/14/23 14:01:08.173 STEP: Found 48 events. - test/e2e/framework/debug/dump.go:46 @ 03/14/23 14:01:08.178 Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:45 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-f8m4d Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:45 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-4t66h Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:45 +0000 UTC - event for dra-test-driver-4t66h: {default-scheduler } Scheduled: Successfully assigned dra-5718/dra-test-driver-4t66h to kind-worker Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:45 +0000 UTC - event for dra-test-driver-f8m4d: {default-scheduler } Scheduled: Successfully assigned dra-5718/dra-test-driver-f8m4d to kind-worker2 Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:46 +0000 UTC - event for dra-test-driver-4t66h: {kubelet kind-worker} Pulling: Pulling image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:46 +0000 UTC - event for dra-test-driver-f8m4d: {kubelet kind-worker2} Pulling: Pulling image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-4t66h: {kubelet kind-worker} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" in 179.777831ms 
(1.034911916s including waiting) Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-4t66h: {kubelet kind-worker} Created: Created container registrar Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-4t66h: {kubelet kind-worker} Started: Started container registrar Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-4t66h: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-4t66h: {kubelet kind-worker} Created: Created container plugin Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-f8m4d: {kubelet kind-worker2} Started: Started container registrar Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-f8m4d: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-f8m4d: {kubelet kind-worker2} Created: Created container plugin Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-f8m4d: {kubelet kind-worker2} Created: Created container registrar Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:47 +0000 UTC - event for dra-test-driver-f8m4d: {kubelet kind-worker2} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" in 155.908818ms (1.081559673s including waiting) Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:48 +0000 UTC - event for dra-test-driver-4t66h: {kubelet kind-worker} Started: Started container plugin Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:48 +0000 UTC - event for dra-test-driver-f8m4d: {kubelet kind-worker2} Started: Started container plugin Mar 14 14:01:08.178: INFO: At 2023-03-14 
13:58:51 +0000 UTC - event for dra-test-driver-other: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-other-mp779 Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:51 +0000 UTC - event for dra-test-driver-other: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-other-xtlpg Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:51 +0000 UTC - event for dra-test-driver-other-mp779: {default-scheduler } Scheduled: Successfully assigned dra-5718/dra-test-driver-other-mp779 to kind-worker2 Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:51 +0000 UTC - event for dra-test-driver-other-xtlpg: {default-scheduler } Scheduled: Successfully assigned dra-5718/dra-test-driver-other-xtlpg to kind-worker Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:52 +0000 UTC - event for dra-test-driver-other-mp779: {kubelet kind-worker2} Created: Created container plugin Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:52 +0000 UTC - event for dra-test-driver-other-mp779: {kubelet kind-worker2} Started: Started container plugin Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:52 +0000 UTC - event for dra-test-driver-other-mp779: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:52 +0000 UTC - event for dra-test-driver-other-mp779: {kubelet kind-worker2} Started: Started container registrar Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:52 +0000 UTC - event for dra-test-driver-other-mp779: {kubelet kind-worker2} Created: Created container registrar Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:52 +0000 UTC - event for dra-test-driver-other-mp779: {kubelet kind-worker2} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:52 +0000 UTC - event for dra-test-driver-other-xtlpg: {kubelet kind-worker} Created: Created container plugin Mar 14 
14:01:08.178: INFO: At 2023-03-14 13:58:52 +0000 UTC - event for dra-test-driver-other-xtlpg: {kubelet kind-worker} Started: Started container registrar Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:52 +0000 UTC - event for dra-test-driver-other-xtlpg: {kubelet kind-worker} Started: Started container plugin Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:52 +0000 UTC - event for dra-test-driver-other-xtlpg: {kubelet kind-worker} Created: Created container registrar Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:52 +0000 UTC - event for dra-test-driver-other-xtlpg: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:52 +0000 UTC - event for dra-test-driver-other-xtlpg: {kubelet kind-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:55 +0000 UTC - event for tester-1: {resource driver dra-5718.k8s.io } Failed: update unsuitable node status: Operation cannot be fulfilled on podschedulings.resource.k8s.io "tester-1": the object has been modified; please apply your changes to the latest version and try again Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:55 +0000 UTC - event for tester-1: {default-scheduler } FailedScheduling: running Reserve plugin "DynamicResources": waiting for resource driver to provide information Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:56 +0000 UTC - event for tester-1: {default-scheduler } FailedScheduling: running Reserve plugin "DynamicResources": waiting for resource driver to allocate resource Mar 14 14:01:08.178: INFO: At 2023-03-14 13:58:59 +0000 UTC - event for tester-1: {default-scheduler } Scheduled: Successfully assigned dra-5718/tester-1 to kind-worker Mar 14 14:01:08.178: INFO: At 2023-03-14 13:59:01 +0000 UTC - event for tester-1: {kubelet kind-worker} Pulled: Container image 
"registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Mar 14 14:01:08.178: INFO: At 2023-03-14 13:59:01 +0000 UTC - event for tester-1: {kubelet kind-worker} Started: Started container with-resource Mar 14 14:01:08.178: INFO: At 2023-03-14 13:59:01 +0000 UTC - event for tester-1: {kubelet kind-worker} Created: Created container with-resource Mar 14 14:01:08.178: INFO: At 2023-03-14 13:59:04 +0000 UTC - event for tester-1: {kubelet kind-worker} Killing: Stopping container with-resource Mar 14 14:01:08.178: INFO: At 2023-03-14 13:59:08 +0000 UTC - event for external-claim: {resource driver dra-5718.k8s.io } Failed: remove allocation: ResourceClaim.resource.k8s.io "external-claim" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete Mar 14 14:01:08.178: INFO: At 2023-03-14 13:59:08 +0000 UTC - event for external-claim-other: {resource driver dra-5718-other.k8s.io } Failed: remove allocation: ResourceClaim.resource.k8s.io "external-claim-other" is invalid: status.allocation: Forbidden: can only remove while marking a deallocation as complete Mar 14 14:01:08.178: INFO: At 2023-03-14 14:00:08 +0000 UTC - event for dra-test-driver-other-mp779: {kubelet kind-worker2} Killing: Stopping container registrar Mar 14 14:01:08.178: INFO: At 2023-03-14 14:00:08 +0000 UTC - event for dra-test-driver-other-mp779: {kubelet kind-worker2} Killing: Stopping container plugin Mar 14 14:01:08.178: INFO: At 2023-03-14 14:00:08 +0000 UTC - event for dra-test-driver-other-xtlpg: {kubelet kind-worker} Killing: Stopping container plugin Mar 14 14:01:08.178: INFO: At 2023-03-14 14:00:08 +0000 UTC - event for dra-test-driver-other-xtlpg: {kubelet kind-worker} Killing: Stopping container registrar Mar 14 14:01:08.182: INFO: POD NODE PHASE GRACE CONDITIONS Mar 14 14:01:08.182: INFO: dra-test-driver-4t66h kind-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:45 +0000 UTC } {Ready True 0001-01-01 
00:00:00 +0000 UTC 2023-03-14 13:58:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:45 +0000 UTC }] Mar 14 14:01:08.182: INFO: dra-test-driver-f8m4d kind-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-03-14 13:58:45 +0000 UTC }] Mar 14 14:01:08.182: INFO: Mar 14 14:01:08.229: INFO: Logging node info for node kind-control-plane Mar 14 14:01:08.236: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7b0c8f1f-7d2e-4b5f-ab52-0e2399b9f764 438 0 2023-03-14 13:57:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-14 13:57:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:57:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-03-14 13:58:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:58:09 +0000 
UTC,LastTransitionTime:2023-03-14 13:57:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:58:09 +0000 UTC,LastTransitionTime:2023-03-14 13:58:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e8e6b089f1f44ab8ef4a2bc879ddd73,SystemUUID:ee43f17b-1489-4ea4-bec5-b7916f4f1fb0,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 
registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:01:08.236: INFO: Logging kubelet events for node kind-control-plane Mar 14 14:01:08.242: INFO: Logging pods the kubelet thinks is on node kind-control-plane Mar 14 14:01:08.257: INFO: local-path-provisioner-687869657c-v9k2k started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:08.257: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 14 14:01:08.257: INFO: kube-proxy-fm2jh started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:08.257: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:01:08.257: INFO: coredns-ffc665895-mnldc started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:08.257: INFO: Container coredns ready: true, restart count 0 Mar 14 14:01:08.257: INFO: etcd-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:08.257: INFO: Container etcd ready: true, restart count 0 Mar 14 14:01:08.257: INFO: kube-apiserver-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:08.257: INFO: Container kube-apiserver ready: true, restart count 0 Mar 14 14:01:08.257: INFO: kindnet-nx87k started at 2023-03-14 13:58:06 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:08.257: INFO: Container kindnet-cni ready: true, restart count 0 
Mar 14 14:01:08.257: INFO: coredns-ffc665895-vmqts started at 2023-03-14 13:58:09 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:08.257: INFO: Container coredns ready: true, restart count 0 Mar 14 14:01:08.257: INFO: kube-controller-manager-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:08.257: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 14 14:01:08.257: INFO: kube-scheduler-kind-control-plane started at 2023-03-14 13:57:54 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:08.257: INFO: Container kube-scheduler ready: true, restart count 0 Mar 14 14:01:08.324: INFO: Latency metrics for node kind-control-plane Mar 14 14:01:08.324: INFO: Logging node info for node kind-worker Mar 14 14:01:08.328: INFO: Node Info: &Node{ObjectMeta:{kind-worker 9cca062e-b3b4-4ef2-9c10-412063b4ece4 1368 0 2023-03-14 13:58:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:58:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-03-14 13:58:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update 
v1 2023-03-14 13:59:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:13 +0000 UTC,LastTransitionTime:2023-03-14 13:58:26 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5a3b3841831c42fc96e5cb187f537f04,SystemUUID:ed67c939-37e3-47de-ab06-0144304a5aa1,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 
registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:01:08.329: INFO: Logging kubelet events for node kind-worker Mar 14 14:01:08.334: INFO: Logging pods the kubelet thinks is on node kind-worker Mar 14 14:01:08.348: INFO: dra-test-driver-t74z8 started at 2023-03-14 14:00:07 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:08.349: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:08.349: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:08.349: INFO: dra-test-driver-t8wgt started at 2023-03-14 14:00:00 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:08.349: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:08.349: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:08.349: INFO: dra-test-driver-8jmwc started at 2023-03-14 14:00:01 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:08.349: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:08.349: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:08.349: 
INFO: kindnet-fzdn9 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:08.349: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:01:08.349: INFO: kube-proxy-l4q98 started at 2023-03-14 13:58:12 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:08.349: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:01:08.349: INFO: dra-test-driver-6zxqg started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:08.349: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:08.349: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:08.349: INFO: dra-test-driver-twq8l started at 2023-03-14 14:00:03 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:08.349: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:08.349: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:08.349: INFO: dra-test-driver-bvfg8 started at 2023-03-14 14:00:02 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:08.349: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:08.349: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:08.349: INFO: dra-test-driver-4t66h started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:08.349: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:08.349: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:08.349: INFO: dra-test-driver-wfhjf started at 2023-03-14 14:00:00 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:08.349: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:08.349: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:08.434: INFO: Latency metrics for node kind-worker Mar 14 14:01:08.434: INFO: Logging node info for node kind-worker2 Mar 14 14:01:08.438: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 49a194e2-5e70-437e-aa3c-3a490ff23c54 1358 0 2023-03-14 13:58:10 
+0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2023-03-14 13:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-14 13:58:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-03-14 13:59:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 
8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441377280 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-03-14 13:59:11 +0000 UTC,LastTransitionTime:2023-03-14 13:58:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:48810b9a669b47cea51d5fa0f821cf84,SystemUUID:603f9452-86ad-460a-83be-e3f10d4a362c,BootID:771a3503-811f-46fb-a0c5-0c1da45ca7d6,KernelVersion:5.4.0-1086-gke,OSImage:Ubuntu 22.04.2 LTS,ContainerRuntimeVersion:containerd://1.6.0-830-g34d078e99,KubeletVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,KubeProxyVersion:v1.27.0-alpha.3.565+2cd610bff27ec6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:8e87338602f544a95ab9ec0a52dba6b9eb6a02d200f37a4f0a11185b2da5f0de 
registry.k8s.io/kube-apiserver:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:118168682,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:c5375ae1edeef1451e0af865362929b65fce0b4fa12e67752276037af4e1de07 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:110398212,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:d89b5ac2026d221a4e96634000ca0690532a65bbe1ed59ad9488fcefd91a8f46 registry.k8s.io/kube-proxy:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:65586530,},ContainerImage{Names:[docker.io/library/import-2023-03-14@sha256:b5348048bd173e3dc8bf630d152623178fc1d51da38a038dd600cca6532db5e0 registry.k8s.io/kube-scheduler:v1.27.0-alpha.3.565_2cd610bff27ec6],SizeBytes:56314615,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230227-15197099],SizeBytes:26506530,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17660818,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230227-8863bcd1],SizeBytes:2898085,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 14 14:01:08.438: INFO: Logging kubelet events for node kind-worker2 Mar 14 14:01:08.445: INFO: Logging pods the kubelet thinks is on node kind-worker2 Mar 14 14:01:08.454: INFO: kube-proxy-vnlx8 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container 
statuses recorded) Mar 14 14:01:08.454: INFO: Container kube-proxy ready: true, restart count 0 Mar 14 14:01:08.454: INFO: dra-test-driver-c9jh8 started at 2023-03-14 14:00:03 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:08.454: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:08.454: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:08.454: INFO: dra-test-driver-w277j started at 2023-03-14 14:00:02 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:08.454: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:08.454: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:08.454: INFO: dra-test-driver-f8m4d started at 2023-03-14 13:58:45 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:08.454: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:08.454: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:08.454: INFO: kindnet-5qdz7 started at 2023-03-14 13:58:11 +0000 UTC (0+1 container statuses recorded) Mar 14 14:01:08.454: INFO: Container kindnet-cni ready: true, restart count 0 Mar 14 14:01:08.454: INFO: dra-test-driver-v6g2p started at 2023-03-14 14:00:07 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:08.454: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:08.454: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:08.454: INFO: dra-test-driver-jmtw2 started at 2023-03-14 14:00:00 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:08.454: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:08.454: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:08.454: INFO: dra-test-driver-ss4k7 started at 2023-03-14 14:00:01 +0000 UTC (0+2 container statuses recorded) Mar 14 14:01:08.454: INFO: Container plugin ready: true, restart count 0 Mar 14 14:01:08.454: INFO: Container registrar ready: true, restart count 0 Mar 14 14:01:08.520: INFO: Latency metrics for node kind-worker2 END 
STEP: dump namespace information after failure - test/e2e/framework/framework.go:288 @ 03/14/23 14:01:08.52 (347ms) < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - dump namespaces | framework.go:209 @ 03/14/23 14:01:08.52 (347ms) > Enter [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:01:08.52 STEP: Destroying namespace "dra-5718" for this suite. - test/e2e/framework/framework.go:351 @ 03/14/23 14:01:08.52 < Exit [DeferCleanup (Each)] [sig-node] DRA [Feature:DynamicResourceAllocation] - tear down framework | framework.go:206 @ 03/14/23 14:01:08.528 (8ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:01:08.529 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 03/14/23 14:01:08.529 (0s)
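The cleanup timeout above traces back to the events quoted earlier: the test driver's attempt to clear the claim's allocation was rejected with `status.allocation: Forbidden: can only remove while marking a deallocation as complete`, so the finalizer stayed in place and the claims never emptied out. The rule being enforced is that a driver may drop `status.allocation` only in an update that completes a previously requested deallocation. A minimal, self-contained sketch of that rule (simplified stand-in types, not the real `resource.k8s.io/v1alpha2` API or apiserver code):

```go
package main

import "fmt"

// ResourceClaimStatus is a simplified stand-in for the v1alpha2 status:
// Allocation is non-nil while the claim is allocated, and
// DeallocationRequested marks that the driver has been asked to deallocate.
type ResourceClaimStatus struct {
	Allocation            *string
	DeallocationRequested bool
}

// validateStatusUpdate mirrors the rejection seen in the log: removing
// status.allocation is only allowed while completing a requested deallocation.
func validateStatusUpdate(old, updated ResourceClaimStatus) error {
	removingAllocation := old.Allocation != nil && updated.Allocation == nil
	if removingAllocation && !old.DeallocationRequested {
		return fmt.Errorf("status.allocation: Forbidden: can only remove while marking a deallocation as complete")
	}
	return nil
}

func main() {
	alloc := "allocated"

	// Dropping the allocation with no pending deallocation request is rejected,
	// matching the "Failed: remove allocation" events in the log.
	err := validateStatusUpdate(
		ResourceClaimStatus{Allocation: &alloc},
		ResourceClaimStatus{},
	)
	fmt.Println(err != nil)

	// Once DeallocationRequested is set, the same removal passes validation.
	err = validateStatusUpdate(
		ResourceClaimStatus{Allocation: &alloc, DeallocationRequested: true},
		ResourceClaimStatus{},
	)
	fmt.Println(err == nil)
}
```

Under this reading, the driver in the failed runs was clearing the allocation without (or before) the deallocation request being marked, so the update was forbidden, the `deletion-protection` finalizer was never removed, and the 60s wait for the namespace's claims to be empty timed out.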
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sDRA\s\[Feature\:DynamicResourceAllocation\]\smultiple\snodes\sreallocation\sworks$'
[FAILED] Timed out after 60.001s. claims in the namespaces Expected <[]v1alpha2.ResourceClaim | len:3, cap:4>: - metadata: creationTimestamp: "2023-03-14T14:00:07Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T14:00:43Z" finalizers: - dra-1317.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {} v:"dra-1317.k8s.io/deletion-protection": {} f:spec: f:allocationMode: {} f:parametersRef: .: {} f:kind: {} f:name: {} f:resourceClassName: {} manager: e2e.test operation: Update time: "2023-03-14T14:00:25Z" - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:status: f:allocation: .: {} f:availableOnNodes: {} f:context: {} f:driverName: {} manager: e2e.test operation: Update subresource: status time: "2023-03-14T14:00:25Z" name: external-claim namespace: dra-1317 resourceVersion: "2477" uid: 43c7a8d7-e50d-4386-b7c7-7309aeb881c0 spec: allocationMode: WaitForFirstConsumer parametersRef: kind: ConfigMap name: parameters-1 resourceClassName: dra-1317-class status: allocation: availableOnNodes: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kind-worker context: - data: '{"EnvVars":{"user_a":"b"},"NodeName":"kind-worker"}' driverName: dra-1317.k8s.io - metadata: creationTimestamp: "2023-03-14T14:00:07Z" deletionGracePeriodSeconds: 0 deletionTimestamp: "2023-03-14T14:00:43Z" finalizers: - dra-1317.k8s.io/deletion-protection managedFields: - apiVersion: resource.k8s.io/v1alpha2 fieldsType: FieldsV1 fieldsV1: f:metadata: f:finalizers: .: {}