Result | SUCCESS
Tests | 8 failed / 40 succeeded
Started |
Elapsed | 4h31m
Revision | main
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-autoscaling\]\s\[Feature\:HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[Serial\]\s\[Slow\]\sDeployment\s\(Pod\sResource\)\sShould\sscale\sfrom\s1\spod\sto\s3\spods\sand\sthen\sfrom\s3\spods\sto\s5\spods\susing\sAverage\sUtilization\sfor\saggregation$'
test/e2e/autoscaling/horizontal_pod_autoscaling.go:209
k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc00348fe68, {0x75e2251?, 0xc00089a7e0?}, {{0x75b7352, 0x4}, {0x75c0585, 0x7}, {0x75c8a66, 0xa}}, 0xc000b88f00)
	test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8
k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x75e2251?, 0x62b7ee5?}, {{0x75b7352, 0x4}, {0x75c0585, 0x7}, {0x75c8a66, 0xa}}, {0x75b659b, 0x3}, ...)
	test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212
k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.1.1()
	test/e2e/autoscaling/horizontal_pod_autoscaling.go:50 +0x88
from junit.kubetest.01.xml
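For context on what this spec exercises: the test drives roughly 250 millicores of CPU against a one-replica Deployment and expects the HorizontalPodAutoscaler to take it from 1 to 3 and then to 5 replicas using average utilization. As a point of reference, below is a minimal client-go sketch of an equivalent autoscaling/v2 HPA targeting average CPU utilization; the namespace, object names, and the 20% target are illustrative assumptions for this sketch, not values confirmed by the e2e framework.

```go
package main

import (
	"context"

	autoscalingv2 "k8s.io/api/autoscaling/v2"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the same path the e2e run logged.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	minReplicas := int32(1)
	target := int32(20) // illustrative CPU utilization target (percent of request)

	// HPA shaped like the one this test relies on: scale a Deployment
	// between 1 and 5 replicas on average CPU utilization.
	hpa := &autoscalingv2.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "test-deployment", Namespace: "horizontal-pod-autoscaling-7438"},
		Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
				APIVersion: "apps/v1", Kind: "Deployment", Name: "test-deployment",
			},
			MinReplicas: &minReplicas,
			MaxReplicas: 5,
			Metrics: []autoscalingv2.MetricSpec{{
				Type: autoscalingv2.ResourceMetricSourceType,
				Resource: &autoscalingv2.ResourceMetricSource{
					Name: corev1.ResourceCPU,
					Target: autoscalingv2.MetricTarget{
						Type:               autoscalingv2.UtilizationMetricType,
						AverageUtilization: &target,
					},
				},
			}},
		},
	}

	if _, err := cs.AutoscalingV2().HorizontalPodAutoscalers(hpa.Namespace).
		Create(context.Background(), hpa, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```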
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) set up framework | framework.go:178 STEP: Creating a kubernetes client 11/20/22 01:12:43.429 Nov 20 01:12:43.429: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename horizontal-pod-autoscaling 11/20/22 01:12:43.43 STEP: Waiting for a default service account to be provisioned in namespace 11/20/22 01:12:43.528 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/20/22 01:12:43.583 [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:31 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:49 Nov 20 01:12:43.638: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Running consuming RC test-deployment via apps/v1beta2, Kind=Deployment with 1 replicas 11/20/22 01:12:43.639 STEP: Creating deployment test-deployment in namespace horizontal-pod-autoscaling-7438 11/20/22 01:12:43.681 I1120 01:12:43.715510 15 runners.go:193] Created deployment with name: test-deployment, namespace: horizontal-pod-autoscaling-7438, replica count: 1 I1120 01:12:53.766910 15 runners.go:193] test-deployment Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: Running controller 11/20/22 01:12:53.766 STEP: creating replication controller test-deployment-ctrl in namespace horizontal-pod-autoscaling-7438 11/20/22 01:12:53.808 I1120 01:12:53.841560 15 runners.go:193] Created replication controller with name: test-deployment-ctrl, namespace: horizontal-pod-autoscaling-7438, replica count: 1 I1120 01:13:03.892526 15 runners.go:193] test-deployment-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 20 01:13:08.892: INFO: Waiting for amount of service:test-deployment-ctrl endpoints to be 1 Nov 20 01:13:08.922: INFO: RC test-deployment: consume 250 millicores in total Nov 20 01:13:08.922: INFO: RC test-deployment: setting consumption to 250 millicores in total Nov 20 01:13:08.922: INFO: RC test-deployment: consume 0 MB in total Nov 20 01:13:08.922: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:13:08.922: INFO: RC test-deployment: disabling mem consumption Nov 20 01:13:08.922: INFO: RC test-deployment: consume custom metric 0 in total Nov 20 01:13:08.922: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:13:08.922: INFO: RC test-deployment: disabling consumption of custom metric QPS Nov 20 01:13:08.989: INFO: waiting for 3 replicas (current: 1) Nov 20 01:13:29.018: INFO: waiting for 3 replicas (current: 1) Nov 20 01:13:45.024: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:13:45.024: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false 
durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:13:49.019: INFO: waiting for 3 replicas (current: 2) Nov 20 01:14:09.019: INFO: waiting for 3 replicas (current: 2) Nov 20 01:14:15.067: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:14:15.067: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:14:29.020: INFO: waiting for 3 replicas (current: 2) Nov 20 01:14:48.115: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:14:48.115: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:14:49.020: INFO: waiting for 3 replicas (current: 2) Nov 20 01:15:09.022: INFO: waiting for 3 replicas (current: 2) Nov 20 01:15:18.155: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:15:18.155: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:15:29.019: INFO: waiting for 3 replicas (current: 2) Nov 20 01:15:48.197: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:15:48.197: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:15:49.020: INFO: waiting for 3 replicas (current: 2) Nov 20 01:16:09.019: INFO: waiting for 3 replicas (current: 2) Nov 20 01:16:18.237: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:16:18.237: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:16:29.020: INFO: waiting for 3 replicas (current: 2) Nov 20 01:16:48.276: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:16:48.276: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:16:49.020: INFO: waiting for 3 replicas (current: 2) Nov 20 01:17:09.018: INFO: waiting for 3 replicas (current: 2) Nov 20 01:17:18.316: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:17:18.316: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:17:29.020: INFO: waiting for 3 replicas (current: 2) Nov 20 01:17:48.360: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:17:48.360: INFO: ConsumeCPU URL: {https 
capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:17:49.020: INFO: waiting for 3 replicas (current: 2) Nov 20 01:18:09.020: INFO: waiting for 3 replicas (current: 2) Nov 20 01:18:18.400: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:18:18.400: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:18:29.020: INFO: waiting for 3 replicas (current: 2) Nov 20 01:18:48.439: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:18:48.439: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:18:49.020: INFO: waiting for 3 replicas (current: 2) Nov 20 01:19:09.019: INFO: waiting for 3 replicas (current: 2) Nov 20 01:19:18.480: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:19:18.480: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:19:29.022: INFO: waiting for 3 replicas (current: 2) Nov 20 01:19:48.521: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:19:48.521: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:19:49.019: INFO: waiting for 3 replicas (current: 2) Nov 20 01:20:09.020: INFO: waiting for 3 replicas (current: 2) Nov 20 01:20:18.561: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:20:18.561: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:20:29.019: INFO: waiting for 3 replicas (current: 2) Nov 20 01:20:48.602: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:20:48.603: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:20:49.019: INFO: waiting for 3 replicas (current: 2) Nov 20 01:21:09.019: INFO: waiting for 3 replicas (current: 2) Nov 20 01:21:18.643: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:21:18.643: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:21:29.019: INFO: waiting for 3 replicas (current: 2) Nov 20 01:21:48.682: 
INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:21:48.682: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:21:49.019: INFO: waiting for 3 replicas (current: 2) Nov 20 01:22:09.019: INFO: waiting for 3 replicas (current: 2) Nov 20 01:22:18.726: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:22:18.726: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:22:29.020: INFO: waiting for 3 replicas (current: 2) Nov 20 01:22:48.766: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:22:48.767: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:22:49.019: INFO: waiting for 3 replicas (current: 2) Nov 20 01:23:09.019: INFO: waiting for 3 replicas (current: 2) Nov 20 01:23:18.804: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:23:18.805: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:23:29.021: INFO: waiting for 3 replicas (current: 2) Nov 20 01:23:48.847: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:23:48.847: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:23:49.021: INFO: waiting for 3 replicas (current: 2) Nov 20 01:24:09.019: INFO: waiting for 3 replicas (current: 2) Nov 20 01:24:18.886: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:24:18.887: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:24:29.020: INFO: waiting for 3 replicas (current: 2) Nov 20 01:24:48.936: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:24:48.936: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:24:49.019: INFO: waiting for 3 replicas (current: 2) Nov 20 01:25:09.019: INFO: waiting for 3 replicas (current: 2) Nov 20 01:25:18.979: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:25:18.979: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false 
durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:25:29.020: INFO: waiting for 3 replicas (current: 2) Nov 20 01:25:49.019: INFO: waiting for 3 replicas (current: 2) Nov 20 01:25:49.022: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:25:49.022: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:26:09.019: INFO: waiting for 3 replicas (current: 2) Nov 20 01:26:19.061: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:26:19.062: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:26:29.019: INFO: waiting for 3 replicas (current: 2) Nov 20 01:26:49.020: INFO: waiting for 3 replicas (current: 2) Nov 20 01:26:49.102: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:26:49.102: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:27:09.021: INFO: waiting for 3 replicas (current: 2) Nov 20 01:27:19.142: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:27:19.143: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:27:29.019: INFO: waiting for 3 replicas (current: 2) Nov 20 01:27:49.020: INFO: waiting for 3 replicas (current: 2) Nov 20 01:27:49.185: INFO: RC test-deployment: sending request to consume 250 millicores Nov 20 01:27:49.185: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-7438/services/test-deployment-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 01:28:09.021: INFO: waiting for 3 replicas (current: 2) Nov 20 01:28:09.051: INFO: waiting for 3 replicas (current: 2) Nov 20 01:28:09.051: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc0001eb910>: { s: "timed out waiting for the condition", } Nov 20 01:28:09.051: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc00348fe68, {0x75e2251?, 0xc00089a7e0?}, {{0x75b7352, 0x4}, {0x75c0585, 0x7}, {0x75c8a66, 0xa}}, 0xc000b88f00) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x75e2251?, 0x62b7ee5?}, {{0x75b7352, 0x4}, {0x75c0585, 0x7}, {0x75c8a66, 0xa}}, {0x75b659b, 0x3}, ...) 
test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.1.1() test/e2e/autoscaling/horizontal_pod_autoscaling.go:50 +0x88 STEP: Removing consuming RC test-deployment 11/20/22 01:28:09.086 Nov 20 01:28:09.086: INFO: RC test-deployment: stopping metric consumer Nov 20 01:28:09.086: INFO: RC test-deployment: stopping mem consumer Nov 20 01:28:09.086: INFO: RC test-deployment: stopping CPU consumer STEP: deleting Deployment.apps test-deployment in namespace horizontal-pod-autoscaling-7438, will wait for the garbage collector to delete the pods 11/20/22 01:28:19.087 Nov 20 01:28:19.198: INFO: Deleting Deployment.apps test-deployment took: 32.78162ms Nov 20 01:28:19.299: INFO: Terminating Deployment.apps test-deployment pods took: 100.940973ms STEP: deleting ReplicationController test-deployment-ctrl in namespace horizontal-pod-autoscaling-7438, will wait for the garbage collector to delete the pods 11/20/22 01:28:21.858 Nov 20 01:28:21.973: INFO: Deleting ReplicationController test-deployment-ctrl took: 35.728143ms Nov 20 01:28:22.074: INFO: Terminating ReplicationController test-deployment-ctrl pods took: 101.198112ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/node/init/init.go:32 Nov 20 01:28:23.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/20/22 01:28:23.862 STEP: Collecting events from namespace "horizontal-pod-autoscaling-7438". 11/20/22 01:28:23.862 STEP: Found 23 events. 
11/20/22 01:28:23.895 Nov 20 01:28:23.895: INFO: At 2022-11-20 01:12:43 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-54fb67b787 to 1 Nov 20 01:28:23.895: INFO: At 2022-11-20 01:12:43 +0000 UTC - event for test-deployment-54fb67b787: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-54fb67b787-wlkzj Nov 20 01:28:23.895: INFO: At 2022-11-20 01:12:43 +0000 UTC - event for test-deployment-54fb67b787-wlkzj: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-7438/test-deployment-54fb67b787-wlkzj to capz-conf-j95hl Nov 20 01:28:23.895: INFO: At 2022-11-20 01:12:46 +0000 UTC - event for test-deployment-54fb67b787-wlkzj: {kubelet capz-conf-j95hl} Pulling: Pulling image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" Nov 20 01:28:23.896: INFO: At 2022-11-20 01:12:47 +0000 UTC - event for test-deployment-54fb67b787-wlkzj: {kubelet capz-conf-j95hl} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" in 1.2372691s (1.2377645s including waiting) Nov 20 01:28:23.896: INFO: At 2022-11-20 01:12:47 +0000 UTC - event for test-deployment-54fb67b787-wlkzj: {kubelet capz-conf-j95hl} Created: Created container test-deployment Nov 20 01:28:23.896: INFO: At 2022-11-20 01:12:49 +0000 UTC - event for test-deployment-54fb67b787-wlkzj: {kubelet capz-conf-j95hl} Started: Started container test-deployment Nov 20 01:28:23.896: INFO: At 2022-11-20 01:12:53 +0000 UTC - event for test-deployment-ctrl: {replication-controller } SuccessfulCreate: Created pod: test-deployment-ctrl-2fk5c Nov 20 01:28:23.896: INFO: At 2022-11-20 01:12:53 +0000 UTC - event for test-deployment-ctrl-2fk5c: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-7438/test-deployment-ctrl-2fk5c to capz-conf-clckq Nov 20 01:28:23.896: INFO: At 2022-11-20 01:12:56 +0000 UTC - event for test-deployment-ctrl-2fk5c: {kubelet capz-conf-clckq} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 20 01:28:23.896: INFO: At 2022-11-20 01:12:57 +0000 UTC - event for test-deployment-ctrl-2fk5c: {kubelet capz-conf-clckq} Created: Created container test-deployment-ctrl Nov 20 01:28:23.896: INFO: At 2022-11-20 01:12:58 +0000 UTC - event for test-deployment-ctrl-2fk5c: {kubelet capz-conf-clckq} Started: Started container test-deployment-ctrl Nov 20 01:28:23.896: INFO: At 2022-11-20 01:13:39 +0000 UTC - event for test-deployment: {horizontal-pod-autoscaler } SuccessfulRescale: New size: 2; reason: cpu resource utilization (percentage of request) above target Nov 20 01:28:23.896: INFO: At 2022-11-20 01:13:39 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-54fb67b787 to 2 from 1 Nov 20 01:28:23.896: INFO: At 2022-11-20 01:13:39 +0000 UTC - event for test-deployment-54fb67b787: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-54fb67b787-wrws4 Nov 20 01:28:23.896: INFO: At 2022-11-20 01:13:39 +0000 UTC - event for test-deployment-54fb67b787-wrws4: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-7438/test-deployment-54fb67b787-wrws4 to capz-conf-clckq Nov 20 01:28:23.896: INFO: At 2022-11-20 01:13:41 +0000 UTC - event for test-deployment-54fb67b787-wrws4: {kubelet capz-conf-clckq} Pulling: Pulling image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" Nov 20 01:28:23.896: INFO: 
At 2022-11-20 01:13:43 +0000 UTC - event for test-deployment-54fb67b787-wrws4: {kubelet capz-conf-clckq} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" in 1.5482711s (1.5482711s including waiting) Nov 20 01:28:23.896: INFO: At 2022-11-20 01:13:43 +0000 UTC - event for test-deployment-54fb67b787-wrws4: {kubelet capz-conf-clckq} Created: Created container test-deployment Nov 20 01:28:23.896: INFO: At 2022-11-20 01:13:45 +0000 UTC - event for test-deployment-54fb67b787-wrws4: {kubelet capz-conf-clckq} Started: Started container test-deployment Nov 20 01:28:23.896: INFO: At 2022-11-20 01:28:19 +0000 UTC - event for test-deployment-54fb67b787-wlkzj: {kubelet capz-conf-j95hl} Killing: Stopping container test-deployment Nov 20 01:28:23.896: INFO: At 2022-11-20 01:28:19 +0000 UTC - event for test-deployment-54fb67b787-wrws4: {kubelet capz-conf-clckq} Killing: Stopping container test-deployment Nov 20 01:28:23.896: INFO: At 2022-11-20 01:28:21 +0000 UTC - event for test-deployment-ctrl-2fk5c: {kubelet capz-conf-clckq} Killing: Stopping container test-deployment-ctrl Nov 20 01:28:23.924: INFO: POD NODE PHASE GRACE CONDITIONS Nov 20 01:28:23.924: INFO: Nov 20 01:28:23.959: INFO: Logging node info for node capz-conf-clckq Nov 20 01:28:23.988: INFO: Node Info: &Node{ObjectMeta:{capz-conf-clckq 7b0dbe9f-6e88-4c01-99b1-2465612a0daf 2744 0 2022-11-20 01:10:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-clckq kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-fmlvhp cluster.x-k8s.io/cluster-namespace:capz-conf-fmlvhp cluster.x-k8s.io/machine:capz-conf-fmlvhp-md-win-59d5d57569-95kvw cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-fmlvhp-md-win-59d5d57569 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.216.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:e4:64:fe volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet.exe Update v1 2022-11-20 01:10:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-20 01:10:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2022-11-20 01:10:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-20 01:11:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-20 01:11:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {kubelet.exe Update v1 2022-11-20 01:24:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-fmlvhp/providers/Microsoft.Compute/virtualMachines/capz-conf-clckq,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-20 01:24:22 +0000 UTC,LastTransitionTime:2022-11-20 01:10:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-20 01:24:22 +0000 UTC,LastTransitionTime:2022-11-20 01:10:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-20 01:24:22 +0000 UTC,LastTransitionTime:2022-11-20 01:10:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-20 01:24:22 +0000 UTC,LastTransitionTime:2022-11-20 01:10:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-clckq,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-clckq,SystemUUID:14041CED-10D5-4B34-9D4C-344B56A7FFCF,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,KubeProxyVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:207280609,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 
registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:203697965,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:168375296,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:167222041,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.27.0-alpha.0.32_57eb5d631ccd61-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 20 01:28:23.989: INFO: Logging kubelet events for node capz-conf-clckq Nov 20 01:28:24.019: INFO: Logging pods the kubelet thinks is on node capz-conf-clckq Nov 20 01:28:24.074: INFO: containerd-logger-g67b6 started at 2022-11-20 01:10:05 +0000 UTC (0+1 container statuses recorded) Nov 20 01:28:24.074: INFO: Container containerd-logger ready: true, restart count 0 Nov 20 01:28:24.074: INFO: kube-proxy-windows-g2j89 started at 2022-11-20 01:10:05 +0000 UTC (0+1 container statuses recorded) Nov 20 01:28:24.074: INFO: Container kube-proxy ready: true, restart count 0 Nov 20 01:28:24.074: INFO: csi-proxy-6bzv9 started at 2022-11-20 01:10:37 +0000 UTC (0+1 container statuses recorded) Nov 20 01:28:24.074: INFO: Container csi-proxy ready: true, restart count 0 Nov 20 01:28:24.074: INFO: calico-node-windows-v42gv started at 2022-11-20 01:10:05 +0000 UTC (1+2 container statuses recorded) Nov 20 01:28:24.074: INFO: Init container install-cni ready: true, restart count 0 Nov 20 01:28:24.074: INFO: Container calico-node-felix ready: true, restart count 1 Nov 20 01:28:24.074: INFO: Container calico-node-startup ready: true, restart count 0 Nov 20 01:28:24.265: INFO: Latency metrics for node capz-conf-clckq Nov 20 01:28:24.266: INFO: Logging node info for node capz-conf-fmlvhp-control-plane-b26jb Nov 20 01:28:24.294: INFO: Node Info: &Node{ObjectMeta:{capz-conf-fmlvhp-control-plane-b26jb c66af1fa-58b8-4558-8db4-48fd044f3e9e 2651 0 2022-11-20 01:06:59 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 
beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:eastus-2 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-fmlvhp-control-plane-b26jb kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:eastus-2] map[cluster.x-k8s.io/cluster-name:capz-conf-fmlvhp cluster.x-k8s.io/cluster-namespace:capz-conf-fmlvhp cluster.x-k8s.io/machine:capz-conf-fmlvhp-control-plane-vnvbt cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-conf-fmlvhp-control-plane kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.89.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-20 01:06:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-20 01:07:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-11-20 01:07:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-11-20 01:07:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-11-20 01:07:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-20 01:23:22 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-fmlvhp/providers/Microsoft.Compute/virtualMachines/capz-conf-fmlvhp-control-plane-b26jb,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-20 01:07:40 +0000 UTC,LastTransitionTime:2022-11-20 01:07:40 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-20 01:23:22 +0000 UTC,LastTransitionTime:2022-11-20 01:06:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-20 01:23:22 +0000 UTC,LastTransitionTime:2022-11-20 01:06:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-20 01:23:22 +0000 UTC,LastTransitionTime:2022-11-20 01:06:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-20 01:23:22 +0000 UTC,LastTransitionTime:2022-11-20 01:07:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-fmlvhp-control-plane-b26jb,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:dd4b205b73c3437f8d4072eaa7e987bd,SystemUUID:6f6cc87d-f984-fb40-b2c2-f407cd2b06d2,BootID:db6fac5b-4561-4119-aa40-0dfa37daf137,KernelVersion:5.4.0-1091-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,KubeProxyVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-apiserver:v1.27.0-alpha.0.32_57eb5d631ccd61],SizeBytes:135160275,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.0.32_57eb5d631ccd61],SizeBytes:124990265,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:b83c1d70989e1fe87583607bf5aee1ee34e52773d4755b95f5cf5a451962f3a4 registry.k8s.io/etcd:3.5.5-0],SizeBytes:102417044,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-proxy:v1.27.0-alpha.0.32_57eb5d631ccd61],SizeBytes:67201736,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-scheduler:v1.27.0-alpha.0.32_57eb5d631ccd61],SizeBytes:57660216,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3 registry.k8s.io/kube-apiserver:v1.25.3],SizeBytes:34238163,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91 registry.k8s.io/kube-controller-manager:v1.25.3],SizeBytes:31261869,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f registry.k8s.io/kube-proxy:v1.25.3],SizeBytes:20265805,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e 
registry.k8s.io/kube-scheduler:v1.25.3],SizeBytes:15798744,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 20 01:28:24.295: INFO: Logging kubelet events for node capz-conf-fmlvhp-control-plane-b26jb Nov 20 01:28:24.326: INFO: Logging pods the kubelet thinks is on node capz-conf-fmlvhp-control-plane-b26jb Nov 20 01:28:24.372: INFO: metrics-server-c9574f845-dwd4x started at 2022-11-20 01:07:32 +0000 UTC (0+1 container statuses recorded) Nov 20 01:28:24.372: INFO: Container metrics-server ready: true, restart count 0 Nov 20 01:28:24.372: INFO: calico-kube-controllers-657b584867-kprw6 started at 2022-11-20 01:07:32 +0000 UTC (0+1 container statuses recorded) Nov 20 01:28:24.372: INFO: Container calico-kube-controllers ready: true, restart count 0 Nov 20 01:28:24.372: INFO: kube-scheduler-capz-conf-fmlvhp-control-plane-b26jb started at 2022-11-20 01:07:02 +0000 UTC (0+1 container statuses recorded) Nov 20 01:28:24.372: INFO: Container kube-scheduler ready: true, restart count 0 Nov 20 01:28:24.373: INFO: kube-proxy-grwp5 started at 2022-11-20 01:07:02 +0000 UTC (0+1 container statuses recorded) Nov 20 01:28:24.373: INFO: Container kube-proxy ready: true, restart count 0 Nov 20 01:28:24.373: INFO: calico-node-2d9f6 started at 2022-11-20 01:07:18 +0000 UTC (2+1 container statuses recorded) Nov 20 01:28:24.373: INFO: Init container upgrade-ipam ready: true, restart count 0 Nov 20 01:28:24.373: INFO: Init container install-cni ready: true, restart count 0 Nov 20 01:28:24.373: INFO: Container calico-node ready: true, restart count 0 Nov 20 01:28:24.373: INFO: coredns-787d4945fb-w8th2 started at 2022-11-20 01:07:32 +0000 UTC (0+1 container statuses recorded) Nov 20 01:28:24.373: INFO: Container coredns ready: true, restart count 0 Nov 20 01:28:24.373: INFO: etcd-capz-conf-fmlvhp-control-plane-b26jb started at 2022-11-20 01:07:02 +0000 UTC (0+1 container statuses recorded) Nov 20 01:28:24.373: INFO: Container etcd ready: true, restart count 0 Nov 20 01:28:24.373: INFO: kube-apiserver-capz-conf-fmlvhp-control-plane-b26jb started at 2022-11-20 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 20 01:28:24.373: INFO: Container kube-apiserver ready: true, restart count 0 Nov 20 01:28:24.373: INFO: kube-controller-manager-capz-conf-fmlvhp-control-plane-b26jb started at 2022-11-20 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 20 01:28:24.373: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 20 01:28:24.373: INFO: coredns-787d4945fb-jnvnw started at 2022-11-20 01:07:32 +0000 UTC (0+1 container statuses recorded) Nov 20 01:28:24.373: INFO: Container coredns ready: true, restart count 0 Nov 20 01:28:24.517: INFO: Latency metrics for node capz-conf-fmlvhp-control-plane-b26jb Nov 20 01:28:24.517: INFO: Logging node info for node capz-conf-j95hl Nov 20 01:28:24.546: INFO: Node Info: 
&Node{ObjectMeta:{capz-conf-j95hl 9874c50e-dbb9-48a0-a3e6-e1158b58eb2b 2668 0 2022-11-20 01:09:13 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-j95hl kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-fmlvhp cluster.x-k8s.io/cluster-namespace:capz-conf-fmlvhp cluster.x-k8s.io/machine:capz-conf-fmlvhp-md-win-59d5d57569-9kkk6 cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-fmlvhp-md-win-59d5d57569 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.119.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:a2:fd:fd volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet.exe Update v1 2022-11-20 01:09:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-20 01:09:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2022-11-20 01:09:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-20 01:10:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-20 01:10:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {kubelet.exe Update v1 2022-11-20 01:23:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-fmlvhp/providers/Microsoft.Compute/virtualMachines/capz-conf-j95hl,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} 
{<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-20 01:23:32 +0000 UTC,LastTransitionTime:2022-11-20 01:09:13 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-20 01:23:32 +0000 UTC,LastTransitionTime:2022-11-20 01:09:13 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-20 01:23:32 +0000 UTC,LastTransitionTime:2022-11-20 01:09:13 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-20 01:23:32 +0000 UTC,LastTransitionTime:2022-11-20 01:09:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-j95hl,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-j95hl,SystemUUID:1EDFC854-811A-4EAC-947F-A7208BD291AA,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,KubeProxyVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:207280609,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:203697965,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:168375296,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:167222041,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.27.0-alpha.0.32_57eb5d631ccd61-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a 
registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 20 01:28:24.546: INFO: Logging kubelet events for node capz-conf-j95hl Nov 20 01:28:24.576: INFO: Logging pods the kubelet thinks is on node capz-conf-j95hl Nov 20 01:28:24.624: INFO: calico-node-windows-6xjrh started at 2022-11-20 01:09:14 +0000 UTC (1+2 container statuses recorded) Nov 20 01:28:24.625: INFO: Init container install-cni ready: true, restart count 0 Nov 20 01:28:24.625: INFO: Container calico-node-felix ready: true, restart count 1 Nov 20 01:28:24.625: INFO: Container calico-node-startup ready: true, restart count 0 Nov 20 01:28:24.625: INFO: csi-proxy-8gwl4 started at 2022-11-20 01:09:37 +0000 UTC (0+1 container statuses recorded) Nov 20 01:28:24.625: INFO: Container csi-proxy ready: true, restart count 0 Nov 20 01:28:24.625: INFO: kube-proxy-windows-p95gh started at 2022-11-20 01:09:14 +0000 UTC (0+1 container statuses recorded) Nov 20 01:28:24.625: INFO: Container kube-proxy ready: true, restart count 0 Nov 20 01:28:24.625: INFO: containerd-logger-hbjlt started at 2022-11-20 01:09:14 +0000 UTC (0+1 container statuses recorded) Nov 20 01:28:24.625: INFO: Container containerd-logger ready: true, restart count 0 Nov 20 01:28:24.766: INFO: Latency metrics for node capz-conf-j95hl [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193 STEP: Destroying namespace "horizontal-pod-autoscaling-7438" for this suite. 11/20/22 01:28:24.766
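The event list above shows the HPA rescaling the Deployment from 1 to 2 replicas at 01:13:39 and then holding at 2 until the 15-minute wait timed out. The documented HPA rule is desiredReplicas = ceil(currentReplicas x currentUtilization / targetUtilization), with changes suppressed while that ratio sits inside a tolerance band (0.1 by default in kube-controller-manager). The sketch below only works that arithmetic with illustrative numbers to show how a run can plateau at 2 replicas when per-pod utilization settles near the target; the log does not record which metric values the controller actually saw, so this is a plausible mechanism, not a confirmed root cause.

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas applies the standard HPA scaling rule documented for the
// Kubernetes HPA controller:
//   desired = ceil(current * currentUtilization / targetUtilization)
// with changes inside the tolerance band suppressed.
func desiredReplicas(current int32, currentUtil, targetUtil, tolerance float64) int32 {
	ratio := currentUtil / targetUtil
	if math.Abs(ratio-1.0) <= tolerance {
		return current // within tolerance: keep the current replica count
	}
	return int32(math.Ceil(float64(current) * ratio))
}

func main() {
	const tolerance = 0.1 // kube-controller-manager default

	// Illustrative numbers only: with a 20% target, two pods averaging
	// ~21% utilization are inside the tolerance band, so the controller
	// stays at 2 replicas even though the test is waiting for 3.
	fmt.Println(desiredReplicas(1, 60, 20, tolerance)) // 3
	fmt.Println(desiredReplicas(2, 21, 20, tolerance)) // 2 (ratio 1.05, within tolerance)
	fmt.Println(desiredReplicas(2, 35, 20, tolerance)) // 4
}
```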
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-autoscaling\]\s\[Feature\:HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[Serial\]\s\[Slow\]\sReplicaSet\sShould\sscale\sfrom\s1\spod\sto\s3\spods\sand\sthen\sfrom\s3\spods\sto\s5\spods$'
test/e2e/autoscaling/horizontal_pod_autoscaling.go:209
k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc003969e68, {0x75b5638?, 0xc003384120?}, {{0x75b7352, 0x4}, {0x75c0585, 0x7}, {0x75c9646, 0xa}}, 0xc000b88f00)
	test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8
k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x75b5638?, 0x62b7ee5?}, {{0x75b7352, 0x4}, {0x75c0585, 0x7}, {0x75c9646, 0xa}}, {0x75b659b, 0x3}, ...)
	test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212
k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.3.1()
	test/e2e/autoscaling/horizontal_pod_autoscaling.go:71 +0x88
from junit.kubetest.01.xml
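The log for this second failure (below) shows the same load-generation pattern as the first: the test repeatedly POSTs to the resource-consumer controller service through the API server proxy (ConsumeCPU URL with durationSec, millicores, and requestSizeMillicores parameters) while polling the replica count. A rough client-go sketch of that request pattern follows; the namespace, service name, port handling, and parameter values are assumptions taken from the URLs in the log, not the e2e framework's actual helper.

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// consumeCPU mirrors the request shape visible in the log: a POST to
// /api/v1/namespaces/<ns>/services/<name>/proxy/ConsumeCPU
// ?durationSec=30&millicores=250&requestSizeMillicores=100
// sent through the apiserver's service proxy.
func consumeCPU(ctx context.Context, cs kubernetes.Interface, ns, svc string, millicores int) error {
	req := cs.CoreV1().RESTClient().Post().
		Namespace(ns).
		Resource("services").
		Name(svc). // e.g. "rs-ctrl"; a port suffix such as "rs-ctrl:80" may be required
		SubResource("proxy").
		Suffix("ConsumeCPU").
		Param("millicores", fmt.Sprint(millicores)).
		Param("durationSec", "30").
		Param("requestSizeMillicores", "100")
	_, err := req.DoRaw(ctx)
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := consumeCPU(context.Background(), cs, "horizontal-pod-autoscaling-8850", "rs-ctrl", 250); err != nil {
		panic(err)
	}
}
```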
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/20/22 04:01:11.496�[0m Nov 20 04:01:11.496: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename horizontal-pod-autoscaling �[38;5;243m11/20/22 04:01:11.498�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/20/22 04:01:11.585�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/20/22 04:01:11.639�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:31 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods test/e2e/autoscaling/horizontal_pod_autoscaling.go:70 Nov 20 04:01:11.695: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Running consuming RC rs via apps/v1beta2, Kind=ReplicaSet with 1 replicas �[38;5;243m11/20/22 04:01:11.696�[0m �[1mSTEP:�[0m Creating replicaset rs in namespace horizontal-pod-autoscaling-8850 �[38;5;243m11/20/22 04:01:11.736�[0m �[1mSTEP:�[0m creating replicaset rs in namespace horizontal-pod-autoscaling-8850 �[38;5;243m11/20/22 04:01:11.736�[0m I1120 04:01:11.783870 15 runners.go:193] Created replica set with name: rs, namespace: horizontal-pod-autoscaling-8850, replica count: 1 I1120 04:01:21.835793 15 runners.go:193] rs Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m11/20/22 04:01:21.835�[0m �[1mSTEP:�[0m creating replication controller rs-ctrl in namespace horizontal-pod-autoscaling-8850 �[38;5;243m11/20/22 04:01:21.879�[0m I1120 04:01:21.912819 15 runners.go:193] Created replication controller with name: rs-ctrl, namespace: horizontal-pod-autoscaling-8850, replica count: 1 I1120 04:01:31.964416 15 runners.go:193] rs-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 20 04:01:36.967: INFO: Waiting for amount of service:rs-ctrl endpoints to be 1 Nov 20 04:01:36.996: INFO: RC rs: consume 250 millicores in total Nov 20 04:01:36.996: INFO: RC rs: setting consumption to 250 millicores in total Nov 20 04:01:36.996: INFO: RC rs: consume 0 MB in total Nov 20 04:01:36.996: INFO: RC rs: disabling mem consumption Nov 20 04:01:36.996: INFO: RC rs: consume custom metric 0 in total Nov 20 04:01:36.996: INFO: RC rs: disabling consumption of custom metric QPS Nov 20 04:01:37.056: INFO: waiting for 3 replicas (current: 1) Nov 20 04:01:57.089: INFO: waiting for 3 replicas (current: 1) Nov 20 04:02:06.998: INFO: RC rs: sending request to consume 250 millicores Nov 20 04:02:06.999: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 04:02:17.084: INFO: waiting for 3 replicas (current: 1) Nov 20 04:02:37.086: INFO: waiting for 3 replicas (current: 1) Nov 20 04:02:40.056: INFO: RC rs: sending request to consume 250 millicores Nov 20 04:02:40.057: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 
04:02:57.085: INFO: waiting for 3 replicas (current: 1) Nov 20 04:03:10.095: INFO: RC rs: sending request to consume 250 millicores Nov 20 04:03:10.095: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 04:03:17.086: INFO: waiting for 3 replicas (current: 2) Nov 20 04:03:37.085: INFO: waiting for 3 replicas (current: 2) Nov 20 04:03:40.137: INFO: RC rs: sending request to consume 250 millicores Nov 20 04:03:40.137: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 04:03:57.086: INFO: waiting for 3 replicas (current: 2) Nov 20 04:04:10.178: INFO: RC rs: sending request to consume 250 millicores Nov 20 04:04:10.179: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 04:04:17.085: INFO: waiting for 3 replicas (current: 2) Nov 20 04:04:37.085: INFO: waiting for 3 replicas (current: 2) Nov 20 04:04:40.219: INFO: RC rs: sending request to consume 250 millicores Nov 20 04:04:40.220: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 04:04:57.085: INFO: waiting for 3 replicas (current: 2) Nov 20 04:05:10.260: INFO: RC rs: sending request to consume 250 millicores Nov 20 04:05:10.260: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 04:05:17.086: INFO: waiting for 3 replicas (current: 2) Nov 20 04:05:37.085: INFO: waiting for 3 replicas (current: 2) Nov 20 04:05:40.301: INFO: RC rs: sending request to consume 250 millicores Nov 20 04:05:40.301: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 04:05:57.084: INFO: waiting for 3 replicas (current: 2) Nov 20 04:06:10.341: INFO: RC rs: sending request to consume 250 millicores Nov 20 04:06:10.341: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 04:06:17.088: INFO: waiting for 3 replicas (current: 2) Nov 20 04:06:37.085: INFO: waiting for 3 replicas (current: 2) Nov 20 04:06:40.380: INFO: RC rs: sending request to consume 250 millicores Nov 20 04:06:40.380: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 04:06:57.086: INFO: waiting for 3 replicas (current: 2) Nov 20 04:07:10.421: INFO: RC rs: 
sending request to consume 250 millicores Nov 20 04:07:10.422: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 04:07:17.088: INFO: waiting for 3 replicas (current: 2) Nov 20 04:07:37.085: INFO: waiting for 3 replicas (current: 2) Nov 20 04:07:40.463: INFO: RC rs: sending request to consume 250 millicores Nov 20 04:07:40.463: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 04:07:57.086: INFO: waiting for 3 replicas (current: 2) Nov 20 04:08:10.503: INFO: RC rs: sending request to consume 250 millicores Nov 20 04:08:10.503: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 04:08:17.086: INFO: waiting for 3 replicas (current: 2) Nov 20 04:08:37.086: INFO: waiting for 3 replicas (current: 2) Nov 20 04:08:40.542: INFO: RC rs: sending request to consume 250 millicores Nov 20 04:08:40.542: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 04:08:57.086: INFO: waiting for 3 replicas (current: 2) Nov 20 04:09:10.584: INFO: RC rs: sending request to consume 250 millicores Nov 20 04:09:10.584: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 04:09:17.086: INFO: waiting for 3 replicas (current: 2) Nov 20 04:09:37.085: INFO: waiting for 3 replicas (current: 2) Nov 20 04:09:40.625: INFO: RC rs: sending request to consume 250 millicores Nov 20 04:09:40.625: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 04:09:57.085: INFO: waiting for 3 replicas (current: 2) Nov 20 04:10:10.665: INFO: RC rs: sending request to consume 250 millicores Nov 20 04:10:10.665: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 04:10:17.086: INFO: waiting for 3 replicas (current: 2) Nov 20 04:10:37.085: INFO: waiting for 3 replicas (current: 2) Nov 20 04:10:40.704: INFO: RC rs: sending request to consume 250 millicores Nov 20 04:10:40.704: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 04:10:57.086: INFO: waiting for 3 replicas (current: 2) Nov 20 04:11:10.742: INFO: RC rs: sending request to consume 250 millicores Nov 20 04:11:10.742: INFO: ConsumeCPU URL: {https 
capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 04:11:17.086: INFO: waiting for 3 replicas (current: 2) Nov 20 04:11:37.085: INFO: waiting for 3 replicas (current: 2) Nov 20 04:11:40.783: INFO: RC rs: sending request to consume 250 millicores Nov 20 04:11:40.783: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 04:11:57.086: INFO: waiting for 3 replicas (current: 2) Nov 20 04:12:10.822: INFO: RC rs: sending request to consume 250 millicores Nov 20 04:12:10.822: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 04:12:17.087: INFO: waiting for 3 replicas (current: 2) Nov 20 04:12:37.085: INFO: waiting for 3 replicas (current: 2) Nov 20 04:12:40.862: INFO: RC rs: sending request to consume 250 millicores Nov 20 04:12:40.863: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 04:12:57.086: INFO: waiting for 3 replicas (current: 2) Nov 20 04:13:10.901: INFO: RC rs: sending request to consume 250 millicores Nov 20 04:13:10.901: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 04:13:17.088: INFO: waiting for 3 replicas (current: 2) Nov 20 04:13:37.085: INFO: waiting for 3 replicas (current: 2) Nov 20 04:13:40.940: INFO: RC rs: sending request to consume 250 millicores Nov 20 04:13:40.940: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 04:13:57.084: INFO: waiting for 3 replicas (current: 2) Nov 20 04:14:10.977: INFO: RC rs: sending request to consume 250 millicores Nov 20 04:14:10.978: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 04:14:17.089: INFO: waiting for 3 replicas (current: 2) Nov 20 04:14:37.085: INFO: waiting for 3 replicas (current: 2) Nov 20 04:14:41.027: INFO: RC rs: sending request to consume 250 millicores Nov 20 04:14:41.027: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 04:14:57.085: INFO: waiting for 3 replicas (current: 2) Nov 20 04:15:11.069: INFO: RC rs: sending request to consume 250 millicores Nov 20 04:15:11.070: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 04:15:17.088: INFO: waiting for 3 replicas (current: 2) Nov 20 04:15:37.086: INFO: waiting for 3 replicas (current: 2) Nov 20 04:15:41.114: INFO: RC rs: sending request to consume 250 millicores Nov 20 04:15:41.114: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 04:15:57.086: INFO: waiting for 3 replicas (current: 2) Nov 20 04:16:11.157: INFO: RC rs: sending request to consume 250 millicores Nov 20 04:16:11.157: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-8850/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Nov 20 04:16:17.088: INFO: waiting for 3 replicas (current: 2) Nov 20 04:16:37.088: INFO: waiting for 3 replicas (current: 2) Nov 20 04:16:37.117: INFO: waiting for 3 replicas (current: 2) Nov 20 04:16:37.117: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc0001eb910>: { s: "timed out waiting for the condition", } Nov 20 04:16:37.117: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc003969e68, {0x75b5638?, 0xc003384120?}, {{0x75b7352, 0x4}, {0x75c0585, 0x7}, {0x75c9646, 0xa}}, 0xc000b88f00) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x75b5638?, 0x62b7ee5?}, {{0x75b7352, 0x4}, {0x75c0585, 0x7}, {0x75c9646, 0xa}}, {0x75b659b, 0x3}, ...) test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.3.1() test/e2e/autoscaling/horizontal_pod_autoscaling.go:71 +0x88 �[1mSTEP:�[0m Removing consuming RC rs �[38;5;243m11/20/22 04:16:37.151�[0m Nov 20 04:16:37.151: INFO: RC rs: stopping metric consumer Nov 20 04:16:37.151: INFO: RC rs: stopping CPU consumer Nov 20 04:16:37.151: INFO: RC rs: stopping mem consumer �[1mSTEP:�[0m deleting ReplicaSet.apps rs in namespace horizontal-pod-autoscaling-8850, will wait for the garbage collector to delete the pods �[38;5;243m11/20/22 04:16:47.151�[0m Nov 20 04:16:47.263: INFO: Deleting ReplicaSet.apps rs took: 32.5318ms Nov 20 04:16:47.364: INFO: Terminating ReplicaSet.apps rs pods took: 100.353701ms �[1mSTEP:�[0m deleting ReplicationController rs-ctrl in namespace horizontal-pod-autoscaling-8850, will wait for the garbage collector to delete the pods �[38;5;243m11/20/22 04:16:50.517�[0m Nov 20 04:16:50.628: INFO: Deleting ReplicationController rs-ctrl took: 31.721884ms Nov 20 04:16:50.728: INFO: Terminating ReplicationController rs-ctrl pods took: 100.55643ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/node/init/init.go:32 Nov 20 04:16:52.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Nov 20 04:16:52.813: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. 
Failure Nov 20 04:16:54.844: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure
(the same NotReady/unreachable-taint message is logged roughly every 2 seconds until Nov 20 04:19:52.875)
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/20/22 04:19:52.875 STEP: Collecting events from namespace "horizontal-pod-autoscaling-8850". 11/20/22 04:19:52.876 STEP: Found 21 events.
�[38;5;243m11/20/22 04:19:52.905�[0m Nov 20 04:19:52.905: INFO: At 2022-11-20 04:01:11 +0000 UTC - event for rs: {replicaset-controller } SuccessfulCreate: Created pod: rs-84whx Nov 20 04:19:52.906: INFO: At 2022-11-20 04:01:11 +0000 UTC - event for rs-84whx: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-8850/rs-84whx to capz-conf-clckq Nov 20 04:19:52.906: INFO: At 2022-11-20 04:01:14 +0000 UTC - event for rs-84whx: {kubelet capz-conf-clckq} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 20 04:19:52.906: INFO: At 2022-11-20 04:01:14 +0000 UTC - event for rs-84whx: {kubelet capz-conf-clckq} Created: Created container rs Nov 20 04:19:52.906: INFO: At 2022-11-20 04:01:16 +0000 UTC - event for rs-84whx: {kubelet capz-conf-clckq} Started: Started container rs Nov 20 04:19:52.906: INFO: At 2022-11-20 04:01:21 +0000 UTC - event for rs-ctrl: {replication-controller } SuccessfulCreate: Created pod: rs-ctrl-98kqr Nov 20 04:19:52.906: INFO: At 2022-11-20 04:01:21 +0000 UTC - event for rs-ctrl-98kqr: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-8850/rs-ctrl-98kqr to capz-conf-clckq Nov 20 04:19:52.906: INFO: At 2022-11-20 04:01:24 +0000 UTC - event for rs-ctrl-98kqr: {kubelet capz-conf-clckq} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 20 04:19:52.906: INFO: At 2022-11-20 04:01:25 +0000 UTC - event for rs-ctrl-98kqr: {kubelet capz-conf-clckq} Created: Created container rs-ctrl Nov 20 04:19:52.906: INFO: At 2022-11-20 04:01:26 +0000 UTC - event for rs-ctrl-98kqr: {kubelet capz-conf-clckq} Started: Started container rs-ctrl Nov 20 04:19:52.906: INFO: At 2022-11-20 04:01:52 +0000 UTC - event for rs: {horizontal-pod-autoscaler } FailedGetResourceMetric: failed to get cpu utilization: did not receive metrics for any ready pods Nov 20 04:19:52.906: INFO: At 2022-11-20 04:01:52 +0000 UTC - event for rs: {horizontal-pod-autoscaler } FailedComputeMetricsReplicas: invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for any ready pods Nov 20 04:19:52.906: INFO: At 2022-11-20 04:02:52 +0000 UTC - event for rs: {replicaset-controller } SuccessfulCreate: Created pod: rs-npjx9 Nov 20 04:19:52.907: INFO: At 2022-11-20 04:02:52 +0000 UTC - event for rs: {horizontal-pod-autoscaler } SuccessfulRescale: New size: 2; reason: cpu resource utilization (percentage of request) above target Nov 20 04:19:52.907: INFO: At 2022-11-20 04:02:52 +0000 UTC - event for rs-npjx9: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-8850/rs-npjx9 to capz-conf-clckq Nov 20 04:19:52.907: INFO: At 2022-11-20 04:02:55 +0000 UTC - event for rs-npjx9: {kubelet capz-conf-clckq} Created: Created container rs Nov 20 04:19:52.907: INFO: At 2022-11-20 04:02:55 +0000 UTC - event for rs-npjx9: {kubelet capz-conf-clckq} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 20 04:19:52.907: INFO: At 2022-11-20 04:02:57 +0000 UTC - event for rs-npjx9: {kubelet capz-conf-clckq} Started: Started container rs Nov 20 04:19:52.907: INFO: At 2022-11-20 04:16:47 +0000 UTC - event for rs-84whx: {kubelet capz-conf-clckq} Killing: Stopping container rs Nov 20 04:19:52.907: INFO: At 2022-11-20 04:16:47 +0000 UTC - event for rs-npjx9: {kubelet capz-conf-clckq} Killing: Stopping container rs 
Nov 20 04:19:52.907: INFO: At 2022-11-20 04:16:50 +0000 UTC - event for rs-ctrl-98kqr: {kubelet capz-conf-clckq} Killing: Stopping container rs-ctrl Nov 20 04:19:52.935: INFO: POD NODE PHASE GRACE CONDITIONS Nov 20 04:19:52.935: INFO: Nov 20 04:19:52.965: INFO: Logging node info for node capz-conf-clckq Nov 20 04:19:52.994: INFO: Node Info: &Node{ObjectMeta:{capz-conf-clckq 7b0dbe9f-6e88-4c01-99b1-2465612a0daf 25876 0 2022-11-20 01:10:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-clckq kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-fmlvhp cluster.x-k8s.io/cluster-namespace:capz-conf-fmlvhp cluster.x-k8s.io/machine:capz-conf-fmlvhp-md-win-59d5d57569-95kvw cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-fmlvhp-md-win-59d5d57569 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.216.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:e4:64:fe volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet.exe Update v1 2022-11-20 01:10:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-20 01:10:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2022-11-20 01:10:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-20 01:11:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-20 01:11:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {e2e.test Update v1 2022-11-20 02:47:16 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}} status} {kubelet.exe Update v1 2022-11-20 04:17:17 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakecpu":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-fmlvhp/providers/Microsoft.Compute/virtualMachines/capz-conf-clckq,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},example.com/fakecpu: {{1 3} {<nil>} 1k DecimalSI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},example.com/fakecpu: {{1 3} {<nil>} 1k DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-20 04:17:17 +0000 UTC,LastTransitionTime:2022-11-20 01:10:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-20 04:17:17 +0000 UTC,LastTransitionTime:2022-11-20 01:10:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-20 04:17:17 +0000 UTC,LastTransitionTime:2022-11-20 01:10:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-20 04:17:17 +0000 UTC,LastTransitionTime:2022-11-20 01:10:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-clckq,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-clckq,SystemUUID:14041CED-10D5-4B34-9D4C-344B56A7FFCF,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,KubeProxyVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:269513752,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:207280609,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:203697965,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:168375296,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:167222041,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 
ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.27.0-alpha.0.32_57eb5d631ccd61-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 20 04:19:52.994: INFO: Logging kubelet events for node capz-conf-clckq Nov 20 04:19:53.022: INFO: Logging pods the kubelet thinks is on node capz-conf-clckq Nov 20 04:19:53.072: INFO: containerd-logger-g67b6 started at 2022-11-20 01:10:05 +0000 UTC (0+1 container statuses recorded) Nov 20 04:19:53.072: INFO: Container containerd-logger ready: true, restart count 0 Nov 20 04:19:53.072: INFO: calico-node-windows-v42gv started at 2022-11-20 01:10:05 +0000 UTC (1+2 container statuses recorded) Nov 20 04:19:53.072: INFO: Init container install-cni ready: true, restart count 0 Nov 20 04:19:53.072: INFO: Container calico-node-felix ready: true, restart count 1 Nov 20 04:19:53.072: INFO: Container calico-node-startup ready: true, restart count 0 Nov 20 04:19:53.073: INFO: csi-proxy-6bzv9 started at 2022-11-20 01:10:37 +0000 UTC (0+1 container statuses recorded) Nov 20 04:19:53.073: INFO: Container csi-proxy ready: true, restart count 0 Nov 20 04:19:53.073: INFO: kube-proxy-windows-g2j89 started at 2022-11-20 01:10:05 +0000 UTC (0+1 container statuses recorded) Nov 20 04:19:53.073: INFO: Container kube-proxy ready: true, restart count 0 Nov 20 04:19:53.254: INFO: Latency metrics for node capz-conf-clckq Nov 20 04:19:53.254: INFO: Logging node info for node capz-conf-fmlvhp-control-plane-b26jb Nov 20 04:19:53.283: INFO: Node Info: &Node{ObjectMeta:{capz-conf-fmlvhp-control-plane-b26jb c66af1fa-58b8-4558-8db4-48fd044f3e9e 25802 0 2022-11-20 01:06:59 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:eastus-2 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-fmlvhp-control-plane-b26jb kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:eastus-2] map[cluster.x-k8s.io/cluster-name:capz-conf-fmlvhp cluster.x-k8s.io/cluster-namespace:capz-conf-fmlvhp 
cluster.x-k8s.io/machine:capz-conf-fmlvhp-control-plane-vnvbt cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-conf-fmlvhp-control-plane kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.89.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-20 01:06:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-20 01:07:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-11-20 01:07:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-11-20 01:07:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-11-20 01:07:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-20 04:16:49 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-fmlvhp/providers/Microsoft.Compute/virtualMachines/capz-conf-fmlvhp-control-plane-b26jb,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} 
{<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-20 01:07:40 +0000 UTC,LastTransitionTime:2022-11-20 01:07:40 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-20 04:16:49 +0000 UTC,LastTransitionTime:2022-11-20 01:06:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-20 04:16:49 +0000 UTC,LastTransitionTime:2022-11-20 01:06:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-20 04:16:49 +0000 UTC,LastTransitionTime:2022-11-20 01:06:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-20 04:16:49 +0000 UTC,LastTransitionTime:2022-11-20 01:07:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-fmlvhp-control-plane-b26jb,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:dd4b205b73c3437f8d4072eaa7e987bd,SystemUUID:6f6cc87d-f984-fb40-b2c2-f407cd2b06d2,BootID:db6fac5b-4561-4119-aa40-0dfa37daf137,KernelVersion:5.4.0-1091-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,KubeProxyVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-apiserver:v1.27.0-alpha.0.32_57eb5d631ccd61],SizeBytes:135160275,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.0.32_57eb5d631ccd61],SizeBytes:124990265,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:b83c1d70989e1fe87583607bf5aee1ee34e52773d4755b95f5cf5a451962f3a4 registry.k8s.io/etcd:3.5.5-0],SizeBytes:102417044,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-proxy:v1.27.0-alpha.0.32_57eb5d631ccd61],SizeBytes:67201736,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.32_57eb5d631ccd61 
registry.k8s.io/kube-scheduler:v1.27.0-alpha.0.32_57eb5d631ccd61],SizeBytes:57660216,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3 registry.k8s.io/kube-apiserver:v1.25.3],SizeBytes:34238163,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91 registry.k8s.io/kube-controller-manager:v1.25.3],SizeBytes:31261869,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f registry.k8s.io/kube-proxy:v1.25.3],SizeBytes:20265805,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e registry.k8s.io/kube-scheduler:v1.25.3],SizeBytes:15798744,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 20 04:19:53.283: INFO: Logging kubelet events for node capz-conf-fmlvhp-control-plane-b26jb Nov 20 04:19:53.311: INFO: Logging pods the kubelet thinks is on node capz-conf-fmlvhp-control-plane-b26jb Nov 20 04:19:53.363: INFO: kube-proxy-grwp5 started at 2022-11-20 01:07:02 +0000 UTC (0+1 container statuses recorded) Nov 20 04:19:53.363: INFO: Container kube-proxy ready: true, restart count 0 Nov 20 04:19:53.363: INFO: calico-node-2d9f6 started at 2022-11-20 01:07:18 +0000 UTC (2+1 container statuses recorded) Nov 20 04:19:53.363: INFO: Init container upgrade-ipam ready: true, restart count 0 Nov 20 04:19:53.363: INFO: Init container install-cni ready: true, restart count 0 Nov 20 04:19:53.363: INFO: Container calico-node ready: true, restart count 0 Nov 20 04:19:53.363: INFO: coredns-787d4945fb-w8th2 started at 2022-11-20 01:07:32 +0000 UTC (0+1 container statuses recorded) Nov 20 04:19:53.363: INFO: Container coredns ready: true, restart count 0 Nov 20 04:19:53.364: INFO: metrics-server-c9574f845-dwd4x started at 2022-11-20 01:07:32 +0000 UTC (0+1 container statuses recorded) Nov 20 04:19:53.364: INFO: Container metrics-server ready: true, restart count 0 Nov 20 04:19:53.364: INFO: calico-kube-controllers-657b584867-kprw6 started at 2022-11-20 01:07:32 +0000 UTC (0+1 container statuses recorded) Nov 20 04:19:53.364: INFO: Container calico-kube-controllers ready: true, restart count 0 Nov 20 04:19:53.364: INFO: kube-scheduler-capz-conf-fmlvhp-control-plane-b26jb started at 2022-11-20 01:07:02 +0000 UTC (0+1 container statuses recorded) Nov 20 
04:19:53.364: INFO: Container kube-scheduler ready: true, restart count 0 Nov 20 04:19:53.364: INFO: kube-apiserver-capz-conf-fmlvhp-control-plane-b26jb started at 2022-11-20 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 20 04:19:53.364: INFO: Container kube-apiserver ready: true, restart count 0 Nov 20 04:19:53.364: INFO: kube-controller-manager-capz-conf-fmlvhp-control-plane-b26jb started at 2022-11-20 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 20 04:19:53.364: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 20 04:19:53.364: INFO: coredns-787d4945fb-jnvnw started at 2022-11-20 01:07:32 +0000 UTC (0+1 container statuses recorded) Nov 20 04:19:53.364: INFO: Container coredns ready: true, restart count 0 Nov 20 04:19:53.364: INFO: etcd-capz-conf-fmlvhp-control-plane-b26jb started at 2022-11-20 01:07:02 +0000 UTC (0+1 container statuses recorded) Nov 20 04:19:53.364: INFO: Container etcd ready: true, restart count 0 Nov 20 04:19:53.515: INFO: Latency metrics for node capz-conf-fmlvhp-control-plane-b26jb Nov 20 04:19:53.516: INFO: Logging node info for node capz-conf-j95hl Nov 20 04:19:53.544: INFO: Node Info: &Node{ObjectMeta:{capz-conf-j95hl 9874c50e-dbb9-48a0-a3e6-e1158b58eb2b 12781 0 2022-11-20 01:09:13 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-j95hl kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-fmlvhp cluster.x-k8s.io/cluster-namespace:capz-conf-fmlvhp cluster.x-k8s.io/machine:capz-conf-fmlvhp-md-win-59d5d57569-9kkk6 cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-fmlvhp-md-win-59d5d57569 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.119.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:a2:fd:fd volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet.exe Update v1 2022-11-20 01:09:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-20 01:09:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {manager Update v1 2022-11-20 01:10:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-20 01:10:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {kubelet.exe Update v1 2022-11-20 02:10:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{}}},"f:images":{}}} status} {kube-controller-manager Update v1 2022-11-20 02:11:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2022-11-20 02:11:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} }]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-fmlvhp/providers/Microsoft.Compute/virtualMachines/capz-conf-j95hl,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2022-11-20 02:11:06 +0000 UTC,},Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:2022-11-20 02:11:11 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2022-11-20 02:10:09 +0000 UTC,LastTransitionTime:2022-11-20 02:11:06 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2022-11-20 02:10:09 +0000 UTC,LastTransitionTime:2022-11-20 02:11:06 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2022-11-20 02:10:09 +0000 UTC,LastTransitionTime:2022-11-20 02:11:06 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:2022-11-20 02:10:09 +0000 UTC,LastTransitionTime:2022-11-20 02:11:06 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-j95hl,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-j95hl,SystemUUID:1EDFC854-811A-4EAC-947F-A7208BD291AA,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 
Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,KubeProxyVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:269513752,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:207280609,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:203697965,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:168375296,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:167222041,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.27.0-alpha.0.32_57eb5d631ccd61-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 20 04:19:53.545: INFO: Logging kubelet events for node capz-conf-j95hl Nov 20 04:19:53.573: INFO: Logging pods the kubelet thinks is on node capz-conf-j95hl Nov 20 04:20:23.602: INFO: Unable to retrieve kubelet pods for node capz-conf-j95hl: error trying to reach service: dial tcp 10.1.0.4:10250: i/o timeout [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "horizontal-pod-autoscaling-8850" for this suite. �[38;5;243m11/20/22 04:20:23.602�[0m
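The "Unable to retrieve kubelet pods for node capz-conf-j95hl: error trying to reach service: dial tcp 10.1.0.4:10250: i/o timeout" line above comes from the debug dump fetching the kubelet's pod list through the API server's node proxy; once the Windows node stops responding, that proxy dial can only time out. A minimal client-go sketch of that request path follows (the helper name, port 10250, and JSON decoding are assumptions for illustration, not the framework's exact getKubeletPods code):

    // Minimal sketch: fetch a node's pod list the way the e2e debug dump does,
    // via the API server's node proxy. Helper name and port are assumptions.
    package main

    import (
        "context"
        "encoding/json"
        "fmt"

        v1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func kubeletPods(ctx context.Context, c kubernetes.Interface, nodeName string) (*v1.PodList, error) {
        // GET /api/v1/nodes/<node>:10250/proxy/pods -- the API server dials the kubelet,
        // so an unreachable kubelet surfaces as "error trying to reach service: dial tcp ...: i/o timeout".
        raw, err := c.CoreV1().RESTClient().Get().
            Resource("nodes").
            Name(fmt.Sprintf("%s:10250", nodeName)).
            SubResource("proxy").
            Suffix("pods").
            Do(ctx).Raw()
        if err != nil {
            return nil, err
        }
        pods := &v1.PodList{}
        if err := json.Unmarshal(raw, pods); err != nil {
            return nil, err
        }
        return pods, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        pods, err := kubeletPods(context.Background(), client, "capz-conf-j95hl")
        if err != nil {
            fmt.Println("Unable to retrieve kubelet pods:", err)
            return
        }
        fmt.Printf("kubelet reports %d pods\n", len(pods.Items))
    }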
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-autoscaling\]\s\[Feature\:HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[Serial\]\s\[Slow\]\sReplicationController\sShould\sscale\sfrom\s1\spod\sto\s3\spods\sand\sthen\sfrom\s3\spods\sto\s5\spods\sand\sverify\sdecision\sstability$'
test/e2e/autoscaling/horizontal_pod_autoscaling.go:80
------------------------------
This is the Progress Report generated when the timeout occurred:
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods and verify decision stability (Spec Runtime: 2.021s)
test/e2e/autoscaling/horizontal_pod_autoscaling.go:80
In [It] (Node Runtime: 1.816s)
test/e2e/autoscaling/horizontal_pod_autoscaling.go:80
At [By Step] creating replication controller rc in namespace horizontal-pod-autoscaling-6857 (Step Runtime: 1.767s)
test/e2e/framework/rc/rc_utils.go:85
Spec Goroutine
goroutine 12423 [sleep]
time.Sleep(0x2540be400)
/usr/local/go/src/runtime/time.go:195
k8s.io/kubernetes/test/utils.(*RCConfig).start(0xc003ea0200)
test/utils/runners.go:809
k8s.io/kubernetes/test/utils.RunRC({0x0, {0x801de88, 0xc002f92680}, {0x0, 0x0}, {0xc000a41280, 0x36}, {0x0, 0x0, 0x0}, ...})
test/utils/runners.go:538
> k8s.io/kubernetes/test/e2e/framework/rc.RunRC({0x0, {0x801de88, 0xc002f92680}, {0x0, 0x0}, {0xc000a41280, 0x36}, {0x0, 0x0, 0x0}, ...})
test/e2e/framework/rc/rc_utils.go:88
k8s.io/kubernetes/test/e2e/framework/autoscaling.runServiceAndWorkloadForResourceConsumer({0x801de88, 0xc002f92680}, {0x7ff34d8, 0xc001663a40}, {0x7fda248, 0xc000124c90}, {0xc0038b9a60, 0x1f}, {0x75b5632, 0x2}, ...)
test/e2e/framework/autoscaling/autoscaling_utils.go:637
k8s.io/kubernetes/test/e2e/framework/autoscaling.newResourceConsumer({0x75b5632, 0x2}, {0xc0038b9a60, 0x1f}, {{0x0, 0x0}, {0x75b567c, 0x2}, {0x7605c17, 0x15}}, ...)
test/e2e/framework/autoscaling/autoscaling_utils.go:205
k8s.io/kubernetes/test/e2e/framework/autoscaling.NewDynamicResourceConsumer(...)
test/e2e/framework/autoscaling/autoscaling_utils.go:143
> k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc00174be68?, {0x75b5632?, 0xc0046293e0?}, {{0x0, 0x0}, {0x75b567c, 0x2}, {0x7605c17, 0x15}}, 0xc000b88f00)
test/e2e/autoscaling/horizontal_pod_autoscaling.go:204
> k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x75b5632?, 0x62b7ee5?}, {{0x0, 0x0}, {0x75b567c, 0x2}, {0x7605c17, 0x15}}, {0x75b659b, 0x3}, ...)
test/e2e/autoscaling/horizontal_pod_autoscaling.go:249
> k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.4.1()
test/e2e/autoscaling/horizontal_pod_autoscaling.go:81
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00324a900, 0xc00303bf20})
vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
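For context on the goroutine caught above: RunRC (test/utils/runners.go) creates the replication controller and then sits in a sleep/poll loop until the requested replicas are running, which is exactly where the progress report finds it. A minimal client-go sketch of that pattern follows (helper name, poll interval, and timeout are assumptions, not the framework's runners.go implementation):

    // Minimal sketch of the RunRC-style pattern: create a ReplicationController,
    // then poll until it reports the expected number of ready pods.
    package sketch

    import (
        "context"
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func runRC(ctx context.Context, c kubernetes.Interface, ns string, rc *v1.ReplicationController, replicas int32) error {
        if _, err := c.CoreV1().ReplicationControllers(ns).Create(ctx, rc, metav1.CreateOptions{}); err != nil {
            return err
        }
        // Poll until the controller reports the expected number of ready pods;
        // this is the loop the progress report catches sleeping above.
        return wait.PollImmediate(10*time.Second, 10*time.Minute, func() (bool, error) {
            got, err := c.CoreV1().ReplicationControllers(ns).Get(ctx, rc.Name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            fmt.Printf("%s: %d/%d pods ready\n", rc.Name, got.Status.ReadyReplicas, replicas)
            return got.Status.ReadyReplicas == replicas, nil
        })
    }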
There were additional failures detected after the initial failure:
[TIMEDOUT]
Timedout
In [AfterEach] at: test/e2e/framework/node/init/init.go:32
This is the Progress Report generated when the timeout occurred:
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods and verify decision stability (Spec Runtime: 32.024s)
test/e2e/autoscaling/horizontal_pod_autoscaling.go:80
In [AfterEach] (Node Runtime: 30.001s)
test/e2e/framework/node/init/init.go:32
At [By Step] creating replication controller rc-ctrl in namespace horizontal-pod-autoscaling-6857 (Step Runtime: 21.643s)
test/e2e/framework/rc/rc_utils.go:85
Spec Goroutine
goroutine 12491 [select]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00007c080}, 0xc003ecea68, 0x2fdb16a?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00007c080}, 0xc8?, 0x2fd9d05?, 0x20?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00007c080}, 0x75b521a?, 0xc004468e18?, 0x262a967?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76ec49c?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
k8s.io/kubernetes/test/e2e/framework/node.allNodesReady({0x801de88?, 0xc002f92680}, 0x7fe0b90?)
test/e2e/framework/node/helper.go:122
k8s.io/kubernetes/test/e2e/framework/node.AllNodesReady({0x801de88?, 0xc002f92680?}, 0xc001663f40?)
test/e2e/framework/node/helper.go:108
> k8s.io/kubernetes/test/e2e/framework/node/init.init.0.func1.1()
test/e2e/framework/node/init/init.go:39
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001d1c7e0, 0x0})
vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
Goroutines of Interest
goroutine 12423 [select]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00007c080}, 0xc001db0ab0, 0x2fdb16a?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00007c080}, 0xb0?, 0x2fd9d05?, 0x10?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00007c080}, 0xc0029f8a70?, 0xc00174bc00?, 0x262a967?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b5632?, 0x2?, 0x75b567c?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).WaitForReplicas(0xc001e86dc0, 0x3, 0x3?)
test/e2e/framework/autoscaling/autoscaling_utils.go:478
> k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc00174be68, {0x75b5632?, 0xc0046293e0?}, {{0x0, 0x0}, {0x75b567c, 0x2}, {0x7605c17, 0x15}}, 0xc000b88f00)
test/e2e/autoscaling/horizontal_pod_autoscaling.go:209
> k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x75b5632?, 0x62b7ee5?}, {{0x0, 0x0}, {0x75b567c, 0x2}, {0x7605c17, 0x15}}, {0x75b659b, 0x3}, ...)
test/e2e/autoscaling/horizontal_pod_autoscaling.go:249
> k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.4.1()
test/e2e/autoscaling/horizontal_pod_autoscaling.go:81
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00324a900, 0xc00303bf20})
vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
----------
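The [AfterEach] timeout above is the node health check: AllNodesReady polls until every node reports Ready, and with capz-conf-j95hl stuck at Ready=Unknown ("Kubelet stopped posting node status", see the node dump earlier) the poll cannot complete before the roughly 30s node budget runs out. A minimal client-go sketch of such a readiness poll follows (helper names and intervals are assumptions, not the framework's allNodesReady code):

    // Minimal sketch: treat a node as ready only if its Ready condition is True,
    // and poll until every node in the cluster satisfies that.
    package sketch

    import (
        "context"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func nodeIsReady(n *v1.Node) bool {
        for _, cond := range n.Status.Conditions {
            if cond.Type == v1.NodeReady {
                return cond.Status == v1.ConditionTrue
            }
        }
        return false
    }

    func allNodesReady(ctx context.Context, c kubernetes.Interface, timeout time.Duration) error {
        return wait.PollImmediate(5*time.Second, timeout, func() (bool, error) {
            nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
            if err != nil {
                return false, nil // transient list error: keep polling
            }
            for i := range nodes.Items {
                if !nodeIsReady(&nodes.Items[i]) {
                    return false, nil // e.g. capz-conf-j95hl with Ready=Unknown
                }
            }
            return true, nil
        })
    }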
[TIMEDOUT]
Timedout
In [DeferCleanup (Each)] at: dump namespaces | framework.go:196
This is the Progress Report generated when the timeout occurred:
[sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods and verify decision stability (Spec Runtime: 1m2.026s)
test/e2e/autoscaling/horizontal_pod_autoscaling.go:80
In [DeferCleanup (Each)] (Node Runtime: 30.001s)
dump namespaces | framework.go:196
At [By Step] Found 10 events. (Step Runtime: 29.971s)
test/e2e/framework/debug/dump.go:46
Spec Goroutine
goroutine 12491 [select]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00007c080}, 0xc003ecea68, 0x2fdb16a?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00007c080}, 0xc8?, 0x2fd9d05?, 0x20?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00007c080}, 0x75b521a?, 0xc004468e18?, 0x262a967?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76ec49c?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
k8s.io/kubernetes/test/e2e/framework/node.allNodesReady({0x801de88?, 0xc002f92680}, 0x7fe0b90?)
test/e2e/framework/node/helper.go:122
k8s.io/kubernetes/test/e2e/framework/node.AllNodesReady({0x801de88?, 0xc002f92680?}, 0xc001663f40?)
test/e2e/framework/node/helper.go:108
> k8s.io/kubernetes/test/e2e/framework/node/init.init.0.func1.1()
test/e2e/framework/node/init/init.go:39
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc001d1c7e0, 0x0})
vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
Goroutines of Interest
goroutine 12423 [select]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc00007c080}, 0xc001db0ab0, 0x2fdb16a?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc00007c080}, 0xb0?, 0x2fd9d05?, 0x10?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc00007c080}, 0xc0029f8a70?, 0xc00174bc00?, 0x262a967?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b5632?, 0x2?, 0x75b567c?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).WaitForReplicas(0xc001e86dc0, 0x3, 0x3?)
test/e2e/framework/autoscaling/autoscaling_utils.go:478
> k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc00174be68, {0x75b5632?, 0xc0046293e0?}, {{0x0, 0x0}, {0x75b567c, 0x2}, {0x7605c17, 0x15}}, 0xc000b88f00)
test/e2e/autoscaling/horizontal_pod_autoscaling.go:209
> k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x75b5632?, 0x62b7ee5?}, {{0x0, 0x0}, {0x75b567c, 0x2}, {0x7605c17, 0x15}}, {0x75b659b, 0x3}, ...)
test/e2e/autoscaling/horizontal_pod_autoscaling.go:249
> k8s.io/kubernetes/test/e2e/autoscaling.glob..func6.4.1()
test/e2e/autoscaling/horizontal_pod_autoscaling.go:81
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00324a900, 0xc00303bf20})
vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
goroutine 12563 [select]
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000904780, 0xc003432c00)
vendor/golang.org/x/net/http2/transport.go:1200
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc000294680, 0xc003432c00, {0xe0?})
vendor/golang.org/x/net/http2/transport.go:519
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...)
vendor/golang.org/x/net/http2/transport.go:480
k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc00323e000?}, 0xc003432c00?)
vendor/golang.org/x/net/http2/transport.go:3020
net/http.(*Transport).roundTrip(0xc00323e000, 0xc003432c00)
/usr/local/go/src/net/http/transport.go:540
net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc0034771d0?)
/usr/local/go/src/net/http/roundtrip.go:17
k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0045aa480, 0xc003432b00)
vendor/k8s.io/client-go/transport/round_trippers.go:168
net/http.send(0xc003432b00, {0x7fad100, 0xc0045aa480}, {0x74d54e0?, 0x1?, 0x0?})
/usr/local/go/src/net/http/client.go:251
net/http.(*Client).send(0xc004a72600, 0xc003432b00, {0x7fcafc4f2108?, 0x100?, 0x0?})
/usr/local/go/src/net/http/client.go:175
net/http.(*Client).do(0xc004a72600, 0xc003432b00)
/usr/local/go/src/net/http/client.go:715
net/http.(*Client).Do(...)
/usr/local/go/src/net/http/client.go:581
k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc003432900, {0x7fe0bc8, 0xc00007c088}, 0xc003d85e00?)
vendor/k8s.io/client-go/rest/request.go:964
k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc003432900, {0x7fe0bc8, 0xc00007c088})
vendor/k8s.io/client-go/rest/request.go:1005
> k8s.io/kubernetes/test/e2e/framework/debug.getKubeletPods.func1()
test/e2e/framework/debug/dump.go:154
> k8s.io/kubernetes/test/e2e/framework/debug.getKubeletPods
test/e2e/framework/debug/dump.go:147
goroutine 12525 [select]
> k8s.io/kubernetes/test/e2e/framework/debug.getKubeletPods({0x801de88?, 0xc002f92680}, {0xc000727cd0, 0xf})
test/e2e/framework/debug/dump.go:158
> k8s.io/kubernetes/test/e2e/framework/debug.DumpNodeDebugInfo({0x801de88, 0xc002f92680}, {0xc003106d20?, 0x3, 0xc8?}, 0x78959b8)
test/e2e/framework/debug/dump.go:122
> k8s.io/kubernetes/test/e2e/framework/debug.dumpAllNodeInfo({0x801de88, 0xc002f92680}, 0xc000e5fe30)
test/e2e/framework/debug/dump.go:103
> k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc002f92680}, {0xc0038b9a60, 0x1f})
test/e2e/framework/debug/dump.go:79
> k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc00300a650?, {0xc0038b9a60?, 0x7fa7740?})
test/e2e/framework/debug/init/init.go:34
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
test/e2e/framework/framework.go:274
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.By({0x7696cac, 0x28}, {0xc00397cc70, 0x1, 0x2?})
vendor/github.com/onsi/ginkgo/v2/core_dsl.go:535
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000b88f00)
test/e2e/framework/framework.go:271
reflect.Value.call({0x6627cc0?, 0xc001585150?, 0x0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?})
/usr/local/go/src/reflect/value.go:584
reflect.Value.Call({0x6627cc0?, 0xc001585150?, 0x0?}, {0xae73300?, 0x0?, 0x0?})
/usr/local/go/src/reflect/value.go:368
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.NewCleanupNode.func3()
vendor/github.com/onsi/ginkgo/v2/internal/node.go:571
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc003020900})
vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
from junit.kubetest.01.xml
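Both the interrupted test above and the Memory-scaling test that follows spend their time in the same loop: ResourceConsumer.WaitForReplicas polling the workload ("waiting for 3 replicas (current: 2)") until it reaches the expected replica count, and failing with "timeout waiting 15m0s for 3 replicas" when it never does. A minimal sketch of that wait loop against a Deployment follows (helper name, poll interval, and the use of ReadyReplicas are assumptions, not the framework's WaitForReplicas; the interrupted test above targets a ReplicationController instead):

    // Minimal sketch of a WaitForReplicas-style poll against a Deployment.
    package sketch

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitForReplicas(ctx context.Context, c kubernetes.Interface, ns, deployment string, want int32, timeout time.Duration) error {
        err := wait.PollImmediate(20*time.Second, timeout, func() (bool, error) {
            d, err := c.AppsV1().Deployments(ns).Get(ctx, deployment, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            fmt.Printf("waiting for %d replicas (current: %d)\n", want, d.Status.ReadyReplicas)
            return d.Status.ReadyReplicas == want, nil
        })
        if err != nil {
            // Matches the failure mode reported in these tests.
            return fmt.Errorf("timeout waiting %v for %d replicas: %w", timeout, want, err)
        }
        return nil
    }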
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-autoscaling\]\s\[Feature\:HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sMemory\)\s\[Serial\]\s\[Slow\]\sDeployment\s\(Pod\sResource\)\sShould\sscale\sfrom\s1\spod\sto\s3\spods\sand\sthen\sfrom\s3\spods\sto\s5\spods\susing\sAverage\sValue\sfor\saggregation$'
test/e2e/autoscaling/horizontal_pod_autoscaling.go:209
k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc0017d9e68, {0x75e2251?, 0xc0002566c0?}, {{0x75b7352, 0x4}, {0x75c0585, 0x7}, {0x75c8a66, 0xa}}, 0xc000b88ff0)
test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8
k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x75e2251?, 0x62b7ee5?}, {{0x75b7352, 0x4}, {0x75c0585, 0x7}, {0x75c8a66, 0xa}}, {0x75bc0b7, 0x6}, ...)
test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212
k8s.io/kubernetes/test/e2e/autoscaling.glob..func7.1.2()
test/e2e/autoscaling/horizontal_pod_autoscaling.go:158 +0x88
from junit.kubetest.01.xml
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/20/22 01:34:34.8�[0m Nov 20 01:34:34.801: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename horizontal-pod-autoscaling �[38;5;243m11/20/22 01:34:34.802�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/20/22 01:34:34.89�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/20/22 01:34:34.948�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) test/e2e/framework/metrics/init/init.go:31 [It] Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation test/e2e/autoscaling/horizontal_pod_autoscaling.go:157 Nov 20 01:34:35.002: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Running consuming RC test-deployment via apps/v1beta2, Kind=Deployment with 1 replicas �[38;5;243m11/20/22 01:34:35.003�[0m �[1mSTEP:�[0m Creating deployment test-deployment in namespace horizontal-pod-autoscaling-9057 �[38;5;243m11/20/22 01:34:35.044�[0m I1120 01:34:35.080000 15 runners.go:193] Created deployment with name: test-deployment, namespace: horizontal-pod-autoscaling-9057, replica count: 1 I1120 01:34:45.133173 15 runners.go:193] test-deployment Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m11/20/22 01:34:45.133�[0m �[1mSTEP:�[0m creating replication controller test-deployment-ctrl in namespace horizontal-pod-autoscaling-9057 �[38;5;243m11/20/22 01:34:45.179�[0m I1120 01:34:45.212572 15 runners.go:193] Created replication controller with name: test-deployment-ctrl, namespace: horizontal-pod-autoscaling-9057, replica count: 1 I1120 01:34:55.265181 15 runners.go:193] test-deployment-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 20 01:35:00.266: INFO: Waiting for amount of service:test-deployment-ctrl endpoints to be 1 Nov 20 01:35:00.295: INFO: RC test-deployment: consume 0 millicores in total Nov 20 01:35:00.295: INFO: RC test-deployment: disabling CPU consumption Nov 20 01:35:00.295: INFO: RC test-deployment: consume 250 MB in total Nov 20 01:35:00.295: INFO: RC test-deployment: setting consumption to 250 MB in total Nov 20 01:35:00.295: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:35:00.295: INFO: RC test-deployment: consume custom metric 0 in total Nov 20 01:35:00.295: INFO: RC test-deployment: disabling consumption of custom metric QPS Nov 20 01:35:00.295: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:35:00.359: INFO: waiting for 3 replicas (current: 1) Nov 20 01:35:20.389: INFO: waiting for 3 replicas (current: 2) Nov 20 01:35:30.379: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:35:30.379: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 
01:35:40.390: INFO: waiting for 3 replicas (current: 2) Nov 20 01:36:00.389: INFO: waiting for 3 replicas (current: 2) Nov 20 01:36:00.421: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:36:00.421: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:36:20.392: INFO: waiting for 3 replicas (current: 2) Nov 20 01:36:30.460: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:36:30.460: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:36:40.388: INFO: waiting for 3 replicas (current: 2) Nov 20 01:37:00.389: INFO: waiting for 3 replicas (current: 2) Nov 20 01:37:00.512: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:37:00.512: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:37:20.388: INFO: waiting for 3 replicas (current: 2) Nov 20 01:37:30.549: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:37:30.549: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:37:40.390: INFO: waiting for 3 replicas (current: 2) Nov 20 01:38:00.389: INFO: waiting for 3 replicas (current: 2) Nov 20 01:38:00.588: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:38:00.588: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:38:20.390: INFO: waiting for 3 replicas (current: 2) Nov 20 01:38:30.626: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:38:30.626: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:38:40.390: INFO: waiting for 3 replicas (current: 2) Nov 20 01:39:00.389: INFO: waiting for 3 replicas (current: 2) Nov 20 01:39:00.663: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:39:00.664: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:39:20.388: INFO: waiting for 3 replicas (current: 2) Nov 20 01:39:30.705: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:39:30.705: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false 
durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:39:40.389: INFO: waiting for 3 replicas (current: 2) Nov 20 01:40:00.389: INFO: waiting for 3 replicas (current: 2) Nov 20 01:40:00.748: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:40:00.748: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:40:20.390: INFO: waiting for 3 replicas (current: 2) Nov 20 01:40:30.786: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:40:30.786: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:40:40.390: INFO: waiting for 3 replicas (current: 2) Nov 20 01:41:00.388: INFO: waiting for 3 replicas (current: 2) Nov 20 01:41:00.824: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:41:00.824: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:41:20.391: INFO: waiting for 3 replicas (current: 2) Nov 20 01:41:30.863: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:41:30.863: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:41:40.392: INFO: waiting for 3 replicas (current: 2) Nov 20 01:42:00.388: INFO: waiting for 3 replicas (current: 2) Nov 20 01:42:00.900: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:42:00.901: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:42:20.389: INFO: waiting for 3 replicas (current: 2) Nov 20 01:42:30.940: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:42:30.940: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:42:40.389: INFO: waiting for 3 replicas (current: 2) Nov 20 01:43:00.389: INFO: waiting for 3 replicas (current: 2) Nov 20 01:43:00.978: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:43:00.978: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:43:20.390: INFO: waiting for 3 replicas (current: 2) Nov 20 01:43:31.017: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:43:31.017: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:43:40.389: INFO: waiting for 3 replicas (current: 2) Nov 20 01:44:00.388: INFO: waiting for 3 replicas (current: 2) Nov 20 01:44:01.057: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:44:01.057: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:44:20.390: INFO: waiting for 3 replicas (current: 2) Nov 20 01:44:31.095: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:44:31.095: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:44:40.390: INFO: waiting for 3 replicas (current: 2) Nov 20 01:45:00.389: INFO: waiting for 3 replicas (current: 2) Nov 20 01:45:01.134: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:45:01.134: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:45:20.388: INFO: waiting for 3 replicas (current: 2) Nov 20 01:45:31.171: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:45:31.171: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:45:40.388: INFO: waiting for 3 replicas (current: 2) Nov 20 01:46:00.389: INFO: waiting for 3 replicas (current: 2) Nov 20 01:46:01.210: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:46:01.210: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:46:20.389: INFO: waiting for 3 replicas (current: 2) Nov 20 01:46:31.249: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:46:31.249: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:46:40.389: INFO: waiting for 3 replicas (current: 2) Nov 20 01:47:00.388: INFO: waiting for 3 replicas (current: 2) Nov 20 01:47:01.287: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:47:01.287: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:47:20.389: INFO: waiting for 3 replicas (current: 2) Nov 20 01:47:31.325: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:47:31.325: INFO: ConsumeMem URL: {https 
capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:47:40.388: INFO: waiting for 3 replicas (current: 2) Nov 20 01:48:00.389: INFO: waiting for 3 replicas (current: 2) Nov 20 01:48:01.369: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:48:01.369: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:48:20.389: INFO: waiting for 3 replicas (current: 2) Nov 20 01:48:31.409: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:48:31.409: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:48:40.390: INFO: waiting for 3 replicas (current: 2) Nov 20 01:49:00.390: INFO: waiting for 3 replicas (current: 2) Nov 20 01:49:01.448: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:49:01.448: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:49:20.390: INFO: waiting for 3 replicas (current: 2) Nov 20 01:49:31.488: INFO: RC test-deployment: sending request to consume 250 MB Nov 20 01:49:31.488: INFO: ConsumeMem URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9057/services/test-deployment-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=250&requestSizeMegabytes=100 } Nov 20 01:49:40.389: INFO: waiting for 3 replicas (current: 2) Nov 20 01:50:00.389: INFO: waiting for 3 replicas (current: 2) Nov 20 01:50:00.417: INFO: waiting for 3 replicas (current: 2) Nov 20 01:50:00.417: INFO: Unexpected error: timeout waiting 15m0s for 3 replicas: <*errors.errorString | 0xc0001eb910>: { s: "timed out waiting for the condition", } Nov 20 01:50:00.417: FAIL: timeout waiting 15m0s for 3 replicas: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc0017d9e68, {0x75e2251?, 0xc0002566c0?}, {{0x75b7352, 0x4}, {0x75c0585, 0x7}, {0x75c8a66, 0xa}}, 0xc000b88ff0) test/e2e/autoscaling/horizontal_pod_autoscaling.go:209 +0x2d8 k8s.io/kubernetes/test/e2e/autoscaling.scaleUp({0x75e2251?, 0x62b7ee5?}, {{0x75b7352, 0x4}, {0x75c0585, 0x7}, {0x75c8a66, 0xa}}, {0x75bc0b7, 0x6}, ...) 
test/e2e/autoscaling/horizontal_pod_autoscaling.go:249 +0x212 k8s.io/kubernetes/test/e2e/autoscaling.glob..func7.1.2() test/e2e/autoscaling/horizontal_pod_autoscaling.go:158 +0x88 �[1mSTEP:�[0m Removing consuming RC test-deployment �[38;5;243m11/20/22 01:50:00.45�[0m Nov 20 01:50:00.450: INFO: RC test-deployment: stopping metric consumer Nov 20 01:50:00.450: INFO: RC test-deployment: stopping CPU consumer Nov 20 01:50:00.450: INFO: RC test-deployment: stopping mem consumer �[1mSTEP:�[0m deleting Deployment.apps test-deployment in namespace horizontal-pod-autoscaling-9057, will wait for the garbage collector to delete the pods �[38;5;243m11/20/22 01:50:10.452�[0m Nov 20 01:50:10.564: INFO: Deleting Deployment.apps test-deployment took: 32.126779ms Nov 20 01:50:10.665: INFO: Terminating Deployment.apps test-deployment pods took: 101.601086ms �[1mSTEP:�[0m deleting ReplicationController test-deployment-ctrl in namespace horizontal-pod-autoscaling-9057, will wait for the garbage collector to delete the pods �[38;5;243m11/20/22 01:50:12.835�[0m Nov 20 01:50:12.948: INFO: Deleting ReplicationController test-deployment-ctrl took: 34.46757ms Nov 20 01:50:13.049: INFO: Terminating ReplicationController test-deployment-ctrl pods took: 101.059365ms [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) test/e2e/framework/node/init/init.go:32 Nov 20 01:50:14.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) dump namespaces | framework.go:196 �[1mSTEP:�[0m dump namespace information after failure �[38;5;243m11/20/22 01:50:14.835�[0m �[1mSTEP:�[0m Collecting events from namespace "horizontal-pod-autoscaling-9057". �[38;5;243m11/20/22 01:50:14.835�[0m �[1mSTEP:�[0m Found 21 events. 
�[38;5;243m11/20/22 01:50:14.864�[0m Nov 20 01:50:14.864: INFO: At 2022-11-20 01:34:35 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-54fb67b787 to 1 Nov 20 01:50:14.864: INFO: At 2022-11-20 01:34:35 +0000 UTC - event for test-deployment-54fb67b787: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-54fb67b787-qplsc Nov 20 01:50:14.864: INFO: At 2022-11-20 01:34:35 +0000 UTC - event for test-deployment-54fb67b787-qplsc: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-9057/test-deployment-54fb67b787-qplsc to capz-conf-clckq Nov 20 01:50:14.864: INFO: At 2022-11-20 01:34:38 +0000 UTC - event for test-deployment-54fb67b787-qplsc: {kubelet capz-conf-clckq} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 20 01:50:14.864: INFO: At 2022-11-20 01:34:38 +0000 UTC - event for test-deployment-54fb67b787-qplsc: {kubelet capz-conf-clckq} Created: Created container test-deployment Nov 20 01:50:14.864: INFO: At 2022-11-20 01:34:39 +0000 UTC - event for test-deployment-54fb67b787-qplsc: {kubelet capz-conf-clckq} Started: Started container test-deployment Nov 20 01:50:14.864: INFO: At 2022-11-20 01:34:45 +0000 UTC - event for test-deployment-ctrl: {replication-controller } SuccessfulCreate: Created pod: test-deployment-ctrl-j7fzm Nov 20 01:50:14.864: INFO: At 2022-11-20 01:34:45 +0000 UTC - event for test-deployment-ctrl-j7fzm: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-9057/test-deployment-ctrl-j7fzm to capz-conf-j95hl Nov 20 01:50:14.864: INFO: At 2022-11-20 01:34:47 +0000 UTC - event for test-deployment-ctrl-j7fzm: {kubelet capz-conf-j95hl} Created: Created container test-deployment-ctrl Nov 20 01:50:14.864: INFO: At 2022-11-20 01:34:47 +0000 UTC - event for test-deployment-ctrl-j7fzm: {kubelet capz-conf-j95hl} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 20 01:50:14.864: INFO: At 2022-11-20 01:34:48 +0000 UTC - event for test-deployment-ctrl-j7fzm: {kubelet capz-conf-j95hl} Started: Started container test-deployment-ctrl Nov 20 01:50:14.864: INFO: At 2022-11-20 01:35:15 +0000 UTC - event for test-deployment: {horizontal-pod-autoscaler } SuccessfulRescale: New size: 2; reason: memory resource above target Nov 20 01:50:14.864: INFO: At 2022-11-20 01:35:15 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-54fb67b787 to 2 from 1 Nov 20 01:50:14.864: INFO: At 2022-11-20 01:35:15 +0000 UTC - event for test-deployment-54fb67b787: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-54fb67b787-nk9mx Nov 20 01:50:14.864: INFO: At 2022-11-20 01:35:15 +0000 UTC - event for test-deployment-54fb67b787-nk9mx: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-9057/test-deployment-54fb67b787-nk9mx to capz-conf-j95hl Nov 20 01:50:14.864: INFO: At 2022-11-20 01:35:17 +0000 UTC - event for test-deployment-54fb67b787-nk9mx: {kubelet capz-conf-j95hl} Created: Created container test-deployment Nov 20 01:50:14.864: INFO: At 2022-11-20 01:35:17 +0000 UTC - event for test-deployment-54fb67b787-nk9mx: {kubelet capz-conf-j95hl} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 20 01:50:14.864: INFO: At 2022-11-20 01:35:18 +0000 UTC - event for 
test-deployment-54fb67b787-nk9mx: {kubelet capz-conf-j95hl} Started: Started container test-deployment Nov 20 01:50:14.864: INFO: At 2022-11-20 01:50:10 +0000 UTC - event for test-deployment-54fb67b787-nk9mx: {kubelet capz-conf-j95hl} Killing: Stopping container test-deployment Nov 20 01:50:14.864: INFO: At 2022-11-20 01:50:10 +0000 UTC - event for test-deployment-54fb67b787-qplsc: {kubelet capz-conf-clckq} Killing: Stopping container test-deployment Nov 20 01:50:14.864: INFO: At 2022-11-20 01:50:12 +0000 UTC - event for test-deployment-ctrl-j7fzm: {kubelet capz-conf-j95hl} Killing: Stopping container test-deployment-ctrl Nov 20 01:50:14.892: INFO: POD NODE PHASE GRACE CONDITIONS Nov 20 01:50:14.892: INFO: Nov 20 01:50:14.922: INFO: Logging node info for node capz-conf-clckq Nov 20 01:50:14.952: INFO: Node Info: &Node{ObjectMeta:{capz-conf-clckq 7b0dbe9f-6e88-4c01-99b1-2465612a0daf 6454 0 2022-11-20 01:10:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-clckq kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-fmlvhp cluster.x-k8s.io/cluster-namespace:capz-conf-fmlvhp cluster.x-k8s.io/machine:capz-conf-fmlvhp-md-win-59d5d57569-95kvw cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-fmlvhp-md-win-59d5d57569 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.216.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:e4:64:fe volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet.exe Update v1 2022-11-20 01:10:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-20 01:10:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2022-11-20 01:10:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-20 01:11:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-20 01:11:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {kubelet.exe Update v1 2022-11-20 01:49:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-fmlvhp/providers/Microsoft.Compute/virtualMachines/capz-conf-clckq,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-20 01:49:34 +0000 UTC,LastTransitionTime:2022-11-20 01:10:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-20 01:49:34 +0000 UTC,LastTransitionTime:2022-11-20 01:10:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-20 01:49:34 +0000 UTC,LastTransitionTime:2022-11-20 01:10:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-20 01:49:34 +0000 UTC,LastTransitionTime:2022-11-20 01:10:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-clckq,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-clckq,SystemUUID:14041CED-10D5-4B34-9D4C-344B56A7FFCF,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,KubeProxyVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:269513752,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:207280609,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:203697965,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 
registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:168375296,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:167222041,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.27.0-alpha.0.32_57eb5d631ccd61-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 20 01:50:14.952: INFO: Logging kubelet events for node capz-conf-clckq Nov 20 01:50:14.982: INFO: Logging pods the kubelet thinks is on node capz-conf-clckq Nov 20 01:50:15.032: INFO: calico-node-windows-v42gv started at 2022-11-20 01:10:05 +0000 UTC (1+2 container statuses recorded) Nov 20 01:50:15.032: INFO: Init container install-cni ready: true, restart count 0 Nov 20 01:50:15.032: INFO: Container calico-node-felix ready: true, restart count 1 Nov 20 01:50:15.032: INFO: Container calico-node-startup ready: true, restart count 0 Nov 20 01:50:15.032: INFO: containerd-logger-g67b6 started at 2022-11-20 01:10:05 +0000 UTC (0+1 container statuses recorded) Nov 20 01:50:15.032: INFO: Container containerd-logger ready: true, restart count 0 Nov 20 01:50:15.032: INFO: csi-proxy-6bzv9 started at 2022-11-20 01:10:37 +0000 UTC (0+1 container statuses recorded) Nov 20 01:50:15.032: INFO: Container csi-proxy ready: true, restart count 0 Nov 20 01:50:15.032: INFO: kube-proxy-windows-g2j89 started at 2022-11-20 01:10:05 +0000 UTC (0+1 container statuses recorded) Nov 20 01:50:15.032: INFO: Container kube-proxy ready: true, restart count 0 Nov 20 01:50:15.211: INFO: Latency metrics for node capz-conf-clckq Nov 20 01:50:15.211: INFO: Logging node info for node capz-conf-fmlvhp-control-plane-b26jb Nov 20 01:50:15.241: INFO: Node Info: &Node{ObjectMeta:{capz-conf-fmlvhp-control-plane-b26jb c66af1fa-58b8-4558-8db4-48fd044f3e9e 6391 0 2022-11-20 01:06:59 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:eastus-2 kubernetes.io/arch:amd64 
kubernetes.io/hostname:capz-conf-fmlvhp-control-plane-b26jb kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:eastus-2] map[cluster.x-k8s.io/cluster-name:capz-conf-fmlvhp cluster.x-k8s.io/cluster-namespace:capz-conf-fmlvhp cluster.x-k8s.io/machine:capz-conf-fmlvhp-control-plane-vnvbt cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-conf-fmlvhp-control-plane kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.89.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-20 01:06:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-20 01:07:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-11-20 01:07:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-11-20 01:07:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-11-20 01:07:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-20 01:48:53 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-fmlvhp/providers/Microsoft.Compute/virtualMachines/capz-conf-fmlvhp-control-plane-b26jb,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: 
{{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-20 01:07:40 +0000 UTC,LastTransitionTime:2022-11-20 01:07:40 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-20 01:48:53 +0000 UTC,LastTransitionTime:2022-11-20 01:06:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-20 01:48:53 +0000 UTC,LastTransitionTime:2022-11-20 01:06:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-20 01:48:53 +0000 UTC,LastTransitionTime:2022-11-20 01:06:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-20 01:48:53 +0000 UTC,LastTransitionTime:2022-11-20 01:07:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-fmlvhp-control-plane-b26jb,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:dd4b205b73c3437f8d4072eaa7e987bd,SystemUUID:6f6cc87d-f984-fb40-b2c2-f407cd2b06d2,BootID:db6fac5b-4561-4119-aa40-0dfa37daf137,KernelVersion:5.4.0-1091-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,KubeProxyVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-apiserver:v1.27.0-alpha.0.32_57eb5d631ccd61],SizeBytes:135160275,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.0.32_57eb5d631ccd61],SizeBytes:124990265,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:b83c1d70989e1fe87583607bf5aee1ee34e52773d4755b95f5cf5a451962f3a4 registry.k8s.io/etcd:3.5.5-0],SizeBytes:102417044,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 
docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-proxy:v1.27.0-alpha.0.32_57eb5d631ccd61],SizeBytes:67201736,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-scheduler:v1.27.0-alpha.0.32_57eb5d631ccd61],SizeBytes:57660216,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3 registry.k8s.io/kube-apiserver:v1.25.3],SizeBytes:34238163,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91 registry.k8s.io/kube-controller-manager:v1.25.3],SizeBytes:31261869,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f registry.k8s.io/kube-proxy:v1.25.3],SizeBytes:20265805,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e registry.k8s.io/kube-scheduler:v1.25.3],SizeBytes:15798744,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 20 01:50:15.241: INFO: Logging kubelet events for node capz-conf-fmlvhp-control-plane-b26jb Nov 20 01:50:15.272: INFO: Logging pods the kubelet thinks is on node capz-conf-fmlvhp-control-plane-b26jb Nov 20 01:50:15.318: INFO: calico-node-2d9f6 started at 2022-11-20 01:07:18 +0000 UTC (2+1 container statuses recorded) Nov 20 01:50:15.319: INFO: Init container upgrade-ipam ready: true, restart count 0 Nov 20 01:50:15.319: INFO: Init container install-cni ready: true, restart count 0 Nov 20 01:50:15.319: INFO: Container calico-node ready: true, restart count 0 Nov 20 01:50:15.319: INFO: coredns-787d4945fb-w8th2 started at 2022-11-20 01:07:32 +0000 UTC (0+1 container statuses recorded) Nov 20 01:50:15.319: INFO: Container coredns ready: true, restart count 0 Nov 20 01:50:15.319: INFO: metrics-server-c9574f845-dwd4x started at 2022-11-20 01:07:32 +0000 UTC (0+1 container statuses recorded) Nov 20 01:50:15.319: INFO: Container metrics-server ready: true, restart count 0 Nov 20 01:50:15.319: INFO: calico-kube-controllers-657b584867-kprw6 started at 2022-11-20 01:07:32 +0000 UTC (0+1 container statuses 
recorded) Nov 20 01:50:15.319: INFO: Container calico-kube-controllers ready: true, restart count 0 Nov 20 01:50:15.319: INFO: kube-scheduler-capz-conf-fmlvhp-control-plane-b26jb started at 2022-11-20 01:07:02 +0000 UTC (0+1 container statuses recorded) Nov 20 01:50:15.319: INFO: Container kube-scheduler ready: true, restart count 0 Nov 20 01:50:15.319: INFO: kube-proxy-grwp5 started at 2022-11-20 01:07:02 +0000 UTC (0+1 container statuses recorded) Nov 20 01:50:15.319: INFO: Container kube-proxy ready: true, restart count 0 Nov 20 01:50:15.319: INFO: kube-controller-manager-capz-conf-fmlvhp-control-plane-b26jb started at 2022-11-20 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 20 01:50:15.319: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 20 01:50:15.319: INFO: coredns-787d4945fb-jnvnw started at 2022-11-20 01:07:32 +0000 UTC (0+1 container statuses recorded) Nov 20 01:50:15.319: INFO: Container coredns ready: true, restart count 0 Nov 20 01:50:15.319: INFO: etcd-capz-conf-fmlvhp-control-plane-b26jb started at 2022-11-20 01:07:02 +0000 UTC (0+1 container statuses recorded) Nov 20 01:50:15.319: INFO: Container etcd ready: true, restart count 0 Nov 20 01:50:15.319: INFO: kube-apiserver-capz-conf-fmlvhp-control-plane-b26jb started at 2022-11-20 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 20 01:50:15.319: INFO: Container kube-apiserver ready: true, restart count 0 Nov 20 01:50:15.466: INFO: Latency metrics for node capz-conf-fmlvhp-control-plane-b26jb Nov 20 01:50:15.466: INFO: Logging node info for node capz-conf-j95hl Nov 20 01:50:15.494: INFO: Node Info: &Node{ObjectMeta:{capz-conf-j95hl 9874c50e-dbb9-48a0-a3e6-e1158b58eb2b 6470 0 2022-11-20 01:09:13 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-j95hl kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-fmlvhp cluster.x-k8s.io/cluster-namespace:capz-conf-fmlvhp cluster.x-k8s.io/machine:capz-conf-fmlvhp-md-win-59d5d57569-9kkk6 cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-fmlvhp-md-win-59d5d57569 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.119.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:a2:fd:fd volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet.exe Update v1 2022-11-20 01:09:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-20 01:09:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2022-11-20 01:09:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-20 01:10:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-20 01:10:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {kubelet.exe Update v1 2022-11-20 01:49:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-fmlvhp/providers/Microsoft.Compute/virtualMachines/capz-conf-j95hl,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-20 01:49:44 +0000 UTC,LastTransitionTime:2022-11-20 01:09:13 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-20 01:49:44 +0000 UTC,LastTransitionTime:2022-11-20 01:09:13 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-20 01:49:44 +0000 UTC,LastTransitionTime:2022-11-20 01:09:13 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-20 01:49:44 +0000 UTC,LastTransitionTime:2022-11-20 01:09:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-j95hl,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-j95hl,SystemUUID:1EDFC854-811A-4EAC-947F-A7208BD291AA,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 
Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,KubeProxyVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:269513752,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:207280609,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:203697965,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:168375296,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:167222041,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.27.0-alpha.0.32_57eb5d631ccd61-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 20 01:50:15.495: INFO: Logging kubelet events for node capz-conf-j95hl Nov 20 01:50:15.525: INFO: Logging pods the kubelet thinks is on node capz-conf-j95hl Nov 20 01:50:15.571: INFO: calico-node-windows-6xjrh started at 2022-11-20 01:09:14 +0000 UTC (1+2 container statuses recorded) Nov 20 01:50:15.571: INFO: Init container install-cni ready: true, restart count 0 Nov 20 01:50:15.571: INFO: Container calico-node-felix ready: true, restart count 1 Nov 20 01:50:15.571: INFO: Container calico-node-startup ready: true, restart count 0 Nov 20 01:50:15.571: INFO: kube-proxy-windows-p95gh started at 2022-11-20 01:09:14 +0000 UTC (0+1 container statuses recorded) Nov 20 01:50:15.571: INFO: Container kube-proxy ready: true, restart count 0 Nov 20 01:50:15.571: INFO: containerd-logger-hbjlt started at 2022-11-20 01:09:14 +0000 UTC 
(0+1 container statuses recorded) Nov 20 01:50:15.571: INFO: Container containerd-logger ready: true, restart count 0 Nov 20 01:50:15.571: INFO: csi-proxy-qrf8m started at 2022-11-20 01:30:34 +0000 UTC (0+1 container statuses recorded) Nov 20 01:50:15.571: INFO: Container csi-proxy ready: true, restart count 0 Nov 20 01:50:15.714: INFO: Latency metrics for node capz-conf-j95hl [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) tear down framework | framework.go:193 STEP: Destroying namespace "horizontal-pod-autoscaling-9057" for this suite. 11/20/22 01:50:15.715
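The events collected above show the HPA driving test-deployment on a memory utilization signal (SuccessfulRescale, "memory resource above target"). For orientation only, the following is a minimal client-go sketch of an HPA object of that general shape; it is not the e2e framework's own helper, and the API group, target utilization, and replica bounds are illustrative assumptions rather than the values the test uses.

package main

import (
	"context"

	autoscalingv2 "k8s.io/api/autoscaling/v2"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log's ">>> kubeConfig: /tmp/kubeconfig".
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	minReplicas := int32(1)
	targetUtilization := int32(30) // illustrative memory utilization target, not the test's value
	hpa := &autoscalingv2.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "test-deployment",
			Namespace: "horizontal-pod-autoscaling-9057", // namespace from the dump above
		},
		Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
				APIVersion: "apps/v1", // assumed; the cluster serves apps/v1 Deployments
				Kind:       "Deployment",
				Name:       "test-deployment",
			},
			MinReplicas: &minReplicas,
			MaxReplicas: 5, // illustrative upper bound
			Metrics: []autoscalingv2.MetricSpec{{
				Type: autoscalingv2.ResourceMetricSourceType,
				Resource: &autoscalingv2.ResourceMetricSource{
					Name: corev1.ResourceMemory,
					Target: autoscalingv2.MetricTarget{
						Type:               autoscalingv2.UtilizationMetricType,
						AverageUtilization: &targetUtilization,
					},
				},
			}},
		},
	}

	if _, err := client.AutoscalingV2().HorizontalPodAutoscalers(hpa.Namespace).
		Create(context.TODO(), hpa, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

Creating an object of this kind against the test-deployment Deployment would yield SuccessfulRescale events like the ones dumped above once metrics-server reports memory usage past the target.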
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-autoscaling\]\s\[Feature\:HPA\]\s\[Serial\]\s\[Slow\]\sHorizontal\spod\sautoscaling\s\(non\-default\sbehavior\)\swith\sautoscaling\sdisabled\sshouldn\'t\sscale\sdown$'
test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:197 k8s.io/kubernetes/test/e2e/autoscaling.glob..func8.3.2() test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:197 +0x485 from junit.kubetest.01.xml
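This failure comes from the non-default-behavior suite: per the spec name, the HPA is created with downscaling disabled through the autoscaling/v2 behavior field, CPU consumption is dropped from 330 to 110 millicores, and the test expects the replica count to hold at 3. The log below instead records a desired replica count of 4 and the error "number of replicas above target". As a reference for what a disabled scale-down rule looks like at the API level, here is a small sketch using the autoscaling/v2 Go types; it is illustrative only and not the test's own code.

package main

import (
	"fmt"

	autoscalingv2 "k8s.io/api/autoscaling/v2"
)

func main() {
	// selectPolicy: Disabled on the scaleDown rules tells the HPA controller
	// never to remove replicas; scale-up is unaffected.
	disabled := autoscalingv2.DisabledPolicySelect
	behavior := &autoscalingv2.HorizontalPodAutoscalerBehavior{
		ScaleDown: &autoscalingv2.HPAScalingRules{
			SelectPolicy: &disabled,
		},
	}
	// In the e2e test a stanza like this would sit on the consumer HPA's
	// spec.behavior field; here it is only printed to show the shape.
	fmt.Printf("scaleDown: %+v\n", *behavior.ScaleDown)
}

With selectPolicy: Disabled on the scaleDown rules the controller never removes replicas while scale-up stays active, which matches what the log shows: the replica count drifted up to 4 rather than down, tripping the exact [3, 3] expectation.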
[BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/20/22 04:41:40.654�[0m Nov 20 04:41:40.654: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename horizontal-pod-autoscaling �[38;5;243m11/20/22 04:41:40.655�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/20/22 04:41:40.745�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/20/22 04:41:40.799�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/metrics/init/init.go:31 [It] shouldn't scale down test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:173 �[1mSTEP:�[0m setting up resource consumer and HPA �[38;5;243m11/20/22 04:41:40.853�[0m Nov 20 04:41:40.853: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Running consuming RC consumer via apps/v1beta2, Kind=Deployment with 3 replicas �[38;5;243m11/20/22 04:41:40.854�[0m �[1mSTEP:�[0m Creating deployment consumer in namespace horizontal-pod-autoscaling-2194 �[38;5;243m11/20/22 04:41:40.9�[0m I1120 04:41:40.933925 15 runners.go:193] Created deployment with name: consumer, namespace: horizontal-pod-autoscaling-2194, replica count: 3 I1120 04:41:50.985194 15 runners.go:193] consumer Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m11/20/22 04:41:50.985�[0m �[1mSTEP:�[0m creating replication controller consumer-ctrl in namespace horizontal-pod-autoscaling-2194 �[38;5;243m11/20/22 04:41:51.031�[0m I1120 04:41:51.064050 15 runners.go:193] Created replication controller with name: consumer-ctrl, namespace: horizontal-pod-autoscaling-2194, replica count: 1 I1120 04:42:01.114620 15 runners.go:193] consumer-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 20 04:42:06.116: INFO: Waiting for amount of service:consumer-ctrl endpoints to be 1 Nov 20 04:42:06.144: INFO: RC consumer: consume 330 millicores in total Nov 20 04:42:06.144: INFO: RC consumer: setting consumption to 330 millicores in total Nov 20 04:42:06.144: INFO: RC consumer: sending request to consume 330 millicores Nov 20 04:42:06.144: INFO: RC consumer: consume 0 MB in total Nov 20 04:42:06.144: INFO: RC consumer: consume custom metric 0 in total Nov 20 04:42:06.144: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2194/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=330&requestSizeMillicores=100 } Nov 20 04:42:06.144: INFO: RC consumer: disabling mem consumption Nov 20 04:42:06.144: INFO: RC consumer: disabling consumption of custom metric QPS �[1mSTEP:�[0m trying to trigger scale down �[38;5;243m11/20/22 04:42:06.177�[0m Nov 20 04:42:06.177: INFO: RC consumer: consume 110 millicores in total Nov 20 04:42:06.203: INFO: RC consumer: setting consumption to 110 millicores in total Nov 20 04:42:06.232: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 20 04:42:06.260: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:0 DesiredReplicas:0 CurrentCPUUtilizationPercentage:<nil>} Nov 20 04:42:16.291: INFO: expecting there to be in [3, 3] 
replicas (are: 3) Nov 20 04:42:16.319: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:0 DesiredReplicas:0 CurrentCPUUtilizationPercentage:<nil>} Nov 20 04:42:26.291: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 20 04:42:26.320: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:<nil> CurrentReplicas:3 DesiredReplicas:0 CurrentCPUUtilizationPercentage:<nil>} Nov 20 04:42:36.204: INFO: RC consumer: sending request to consume 110 millicores Nov 20 04:42:36.204: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2194/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=110&requestSizeMillicores=100 } Nov 20 04:42:36.300: INFO: expecting there to be in [3, 3] replicas (are: 3) Nov 20 04:42:36.332: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-20 04:42:36 +0000 UTC CurrentReplicas:3 DesiredReplicas:4 CurrentCPUUtilizationPercentage:0xc002c37a90} Nov 20 04:42:46.290: INFO: expecting there to be in [3, 3] replicas (are: 4) Nov 20 04:42:46.318: INFO: HPA status: {ObservedGeneration:<nil> LastScaleTime:2022-11-20 04:42:36 +0000 UTC CurrentReplicas:3 DesiredReplicas:4 CurrentCPUUtilizationPercentage:0xc002c37dc0} Nov 20 04:42:46.318: INFO: Unexpected error: <*errors.errorString | 0xc000edeef0>: { s: "number of replicas above target", } Nov 20 04:42:46.319: FAIL: number of replicas above target Full Stack Trace k8s.io/kubernetes/test/e2e/autoscaling.glob..func8.3.2() test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:197 +0x485 �[1mSTEP:�[0m Removing consuming RC consumer �[38;5;243m11/20/22 04:42:46.352�[0m Nov 20 04:42:46.352: INFO: RC consumer: stopping metric consumer Nov 20 04:42:46.352: INFO: RC consumer: stopping CPU consumer Nov 20 04:42:46.352: INFO: RC consumer: stopping mem consumer �[1mSTEP:�[0m deleting Deployment.apps consumer in namespace horizontal-pod-autoscaling-2194, will wait for the garbage collector to delete the pods �[38;5;243m11/20/22 04:42:56.353�[0m Nov 20 04:42:56.515: INFO: Deleting Deployment.apps consumer took: 32.994773ms Nov 20 04:42:56.615: INFO: Terminating Deployment.apps consumer pods took: 100.333089ms �[1mSTEP:�[0m deleting ReplicationController consumer-ctrl in namespace horizontal-pod-autoscaling-2194, will wait for the garbage collector to delete the pods �[38;5;243m11/20/22 04:43:00.874�[0m Nov 20 04:43:00.987: INFO: Deleting ReplicationController consumer-ctrl took: 31.856881ms Nov 20 04:43:01.087: INFO: Terminating ReplicationController consumer-ctrl pods took: 100.67772ms [AfterEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/node/init/init.go:32 Nov 20 04:43:02.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Nov 20 04:43:02.967: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:43:04.998: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. 
Failure Nov 20 04:43:06.999: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:43:09.000: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:43:10.998: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:43:12.998: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:43:14.998: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:43:17.001: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:43:19.000: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:43:20.999: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:43:22.999: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:43:24.999: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:43:26.998: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:43:29.002: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. 
Failure Nov 20 04:43:30.999: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:43:32.999: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:43:34.998: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:43:36.998: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:43:38.998: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:43:40.999: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:43:42.998: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:43:45.003: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:43:46.999: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:43:49.001: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:43:50.999: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:43:52.998: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. 
Failure Nov 20 04:43:55.000: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:43:57.001: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:43:59.000: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:44:00.998: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:44:02.998: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:44:04.998: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:44:06.997: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:44:09.000: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:44:10.999: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:44:12.999: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:44:15.000: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:44:16.999: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. 
Failure Nov 20 04:44:18.998: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:44:20.997: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:44:22.999: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:44:25.000: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:44:26.998: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:44:28.998: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:44:30.998: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:44:32.998: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:44:34.998: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:44:36.999: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:44:39.003: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 04:44:40.999: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. 
Failure Nov 20 04:44:42.998: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. (this message repeated roughly every 2 seconds, with identical taints, until 04:46:03.028 while the cleanup waited for the node to become Ready)
Failure [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/20/22 04:46:03.028 STEP: Collecting events from namespace "horizontal-pod-autoscaling-2194". 11/20/22 04:46:03.028 STEP: Found 35 events.
�[38;5;243m11/20/22 04:46:03.059�[0m Nov 20 04:46:03.060: INFO: At 2022-11-20 04:41:40 +0000 UTC - event for consumer: {deployment-controller } ScalingReplicaSet: Scaled up replica set consumer-858f58cb45 to 3 Nov 20 04:46:03.060: INFO: At 2022-11-20 04:41:40 +0000 UTC - event for consumer-858f58cb45: {replicaset-controller } SuccessfulCreate: Created pod: consumer-858f58cb45-cfb5q Nov 20 04:46:03.060: INFO: At 2022-11-20 04:41:41 +0000 UTC - event for consumer-858f58cb45: {replicaset-controller } SuccessfulCreate: Created pod: consumer-858f58cb45-r6xmz Nov 20 04:46:03.060: INFO: At 2022-11-20 04:41:41 +0000 UTC - event for consumer-858f58cb45: {replicaset-controller } SuccessfulCreate: Created pod: consumer-858f58cb45-w9kv6 Nov 20 04:46:03.060: INFO: At 2022-11-20 04:41:41 +0000 UTC - event for consumer-858f58cb45-cfb5q: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-2194/consumer-858f58cb45-cfb5q to capz-conf-clckq Nov 20 04:46:03.060: INFO: At 2022-11-20 04:41:41 +0000 UTC - event for consumer-858f58cb45-r6xmz: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-2194/consumer-858f58cb45-r6xmz to capz-conf-clckq Nov 20 04:46:03.060: INFO: At 2022-11-20 04:41:41 +0000 UTC - event for consumer-858f58cb45-w9kv6: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-2194/consumer-858f58cb45-w9kv6 to capz-conf-clckq Nov 20 04:46:03.060: INFO: At 2022-11-20 04:41:44 +0000 UTC - event for consumer-858f58cb45-cfb5q: {kubelet capz-conf-clckq} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 20 04:46:03.060: INFO: At 2022-11-20 04:41:44 +0000 UTC - event for consumer-858f58cb45-r6xmz: {kubelet capz-conf-clckq} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 20 04:46:03.060: INFO: At 2022-11-20 04:41:44 +0000 UTC - event for consumer-858f58cb45-w9kv6: {kubelet capz-conf-clckq} Created: Created container consumer Nov 20 04:46:03.060: INFO: At 2022-11-20 04:41:44 +0000 UTC - event for consumer-858f58cb45-w9kv6: {kubelet capz-conf-clckq} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 20 04:46:03.060: INFO: At 2022-11-20 04:41:45 +0000 UTC - event for consumer-858f58cb45-cfb5q: {kubelet capz-conf-clckq} Created: Created container consumer Nov 20 04:46:03.060: INFO: At 2022-11-20 04:41:45 +0000 UTC - event for consumer-858f58cb45-r6xmz: {kubelet capz-conf-clckq} Created: Created container consumer Nov 20 04:46:03.060: INFO: At 2022-11-20 04:41:47 +0000 UTC - event for consumer-858f58cb45-cfb5q: {kubelet capz-conf-clckq} Started: Started container consumer Nov 20 04:46:03.060: INFO: At 2022-11-20 04:41:47 +0000 UTC - event for consumer-858f58cb45-r6xmz: {kubelet capz-conf-clckq} Started: Started container consumer Nov 20 04:46:03.060: INFO: At 2022-11-20 04:41:47 +0000 UTC - event for consumer-858f58cb45-w9kv6: {kubelet capz-conf-clckq} Started: Started container consumer Nov 20 04:46:03.060: INFO: At 2022-11-20 04:41:51 +0000 UTC - event for consumer-ctrl: {replication-controller } SuccessfulCreate: Created pod: consumer-ctrl-hxxf2 Nov 20 04:46:03.060: INFO: At 2022-11-20 04:41:51 +0000 UTC - event for consumer-ctrl-hxxf2: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-2194/consumer-ctrl-hxxf2 to capz-conf-clckq Nov 20 04:46:03.060: INFO: At 2022-11-20 04:41:54 +0000 UTC - event for 
consumer-ctrl-hxxf2: {kubelet capz-conf-clckq} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 20 04:46:03.060: INFO: At 2022-11-20 04:41:54 +0000 UTC - event for consumer-ctrl-hxxf2: {kubelet capz-conf-clckq} Created: Created container consumer-ctrl Nov 20 04:46:03.060: INFO: At 2022-11-20 04:41:56 +0000 UTC - event for consumer-ctrl-hxxf2: {kubelet capz-conf-clckq} Started: Started container consumer-ctrl Nov 20 04:46:03.060: INFO: At 2022-11-20 04:42:21 +0000 UTC - event for consumer: {horizontal-pod-autoscaler } FailedComputeMetricsReplicas: invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: did not receive metrics for any ready pods Nov 20 04:46:03.060: INFO: At 2022-11-20 04:42:21 +0000 UTC - event for consumer: {horizontal-pod-autoscaler } FailedGetResourceMetric: failed to get cpu utilization: did not receive metrics for any ready pods Nov 20 04:46:03.060: INFO: At 2022-11-20 04:42:36 +0000 UTC - event for consumer: {deployment-controller } ScalingReplicaSet: Scaled up replica set consumer-858f58cb45 to 4 from 3 Nov 20 04:46:03.060: INFO: At 2022-11-20 04:42:36 +0000 UTC - event for consumer: {horizontal-pod-autoscaler } SuccessfulRescale: New size: 4; reason: cpu resource utilization (percentage of request) above target Nov 20 04:46:03.060: INFO: At 2022-11-20 04:42:36 +0000 UTC - event for consumer-858f58cb45: {replicaset-controller } SuccessfulCreate: Created pod: consumer-858f58cb45-c9944 Nov 20 04:46:03.060: INFO: At 2022-11-20 04:42:36 +0000 UTC - event for consumer-858f58cb45-c9944: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-2194/consumer-858f58cb45-c9944 to capz-conf-clckq Nov 20 04:46:03.061: INFO: At 2022-11-20 04:42:39 +0000 UTC - event for consumer-858f58cb45-c9944: {kubelet capz-conf-clckq} Created: Created container consumer Nov 20 04:46:03.061: INFO: At 2022-11-20 04:42:39 +0000 UTC - event for consumer-858f58cb45-c9944: {kubelet capz-conf-clckq} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 20 04:46:03.061: INFO: At 2022-11-20 04:42:41 +0000 UTC - event for consumer-858f58cb45-c9944: {kubelet capz-conf-clckq} Started: Started container consumer Nov 20 04:46:03.061: INFO: At 2022-11-20 04:42:56 +0000 UTC - event for consumer-858f58cb45-c9944: {kubelet capz-conf-clckq} Killing: Stopping container consumer Nov 20 04:46:03.061: INFO: At 2022-11-20 04:42:56 +0000 UTC - event for consumer-858f58cb45-cfb5q: {kubelet capz-conf-clckq} Killing: Stopping container consumer Nov 20 04:46:03.061: INFO: At 2022-11-20 04:42:56 +0000 UTC - event for consumer-858f58cb45-r6xmz: {kubelet capz-conf-clckq} Killing: Stopping container consumer Nov 20 04:46:03.061: INFO: At 2022-11-20 04:42:56 +0000 UTC - event for consumer-858f58cb45-w9kv6: {kubelet capz-conf-clckq} Killing: Stopping container consumer Nov 20 04:46:03.061: INFO: At 2022-11-20 04:43:01 +0000 UTC - event for consumer-ctrl-hxxf2: {kubelet capz-conf-clckq} Killing: Stopping container consumer-ctrl Nov 20 04:46:03.089: INFO: POD NODE PHASE GRACE CONDITIONS Nov 20 04:46:03.089: INFO: Nov 20 04:46:03.119: INFO: Logging node info for node capz-conf-clckq Nov 20 04:46:03.148: INFO: Node Info: &Node{ObjectMeta:{capz-conf-clckq 7b0dbe9f-6e88-4c01-99b1-2465612a0daf 28422 0 2022-11-20 01:10:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 
beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-clckq kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-fmlvhp cluster.x-k8s.io/cluster-namespace:capz-conf-fmlvhp cluster.x-k8s.io/machine:capz-conf-fmlvhp-md-win-59d5d57569-95kvw cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-fmlvhp-md-win-59d5d57569 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.216.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:e4:64:fe volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet.exe Update v1 2022-11-20 01:10:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-20 01:10:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2022-11-20 01:10:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-20 01:11:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-20 01:11:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {e2e.test Update v1 2022-11-20 02:47:16 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}} status} {kubelet.exe Update v1 2022-11-20 04:42:50 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakecpu":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-fmlvhp/providers/Microsoft.Compute/virtualMachines/capz-conf-clckq,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},example.com/fakecpu: {{1 3} {<nil>} 1k DecimalSI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: 
{{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},example.com/fakecpu: {{1 3} {<nil>} 1k DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-20 04:42:50 +0000 UTC,LastTransitionTime:2022-11-20 01:10:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-20 04:42:50 +0000 UTC,LastTransitionTime:2022-11-20 01:10:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-20 04:42:50 +0000 UTC,LastTransitionTime:2022-11-20 01:10:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-20 04:42:50 +0000 UTC,LastTransitionTime:2022-11-20 01:10:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-clckq,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-clckq,SystemUUID:14041CED-10D5-4B34-9D4C-344B56A7FFCF,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,KubeProxyVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:269513752,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:207280609,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:203697965,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:168375296,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:167222041,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.27.0-alpha.0.32_57eb5d631ccd61-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba 
ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 20 04:46:03.148: INFO: Logging kubelet events for node capz-conf-clckq Nov 20 04:46:03.176: INFO: Logging pods the kubelet thinks is on node capz-conf-clckq Nov 20 04:46:03.226: INFO: csi-proxy-s6pcw started at 2022-11-20 04:31:14 +0000 UTC (0+1 container statuses recorded) Nov 20 04:46:03.226: INFO: Container csi-proxy ready: true, restart count 0 Nov 20 04:46:03.226: INFO: containerd-logger-g67b6 started at 2022-11-20 01:10:05 +0000 UTC (0+1 container statuses recorded) Nov 20 04:46:03.226: INFO: Container containerd-logger ready: true, restart count 0 Nov 20 04:46:03.226: INFO: calico-node-windows-v42gv started at 2022-11-20 01:10:05 +0000 UTC (1+2 container statuses recorded) Nov 20 04:46:03.226: INFO: Init container install-cni ready: true, restart count 0 Nov 20 04:46:03.226: INFO: Container calico-node-felix ready: true, restart count 1 Nov 20 04:46:03.226: INFO: Container calico-node-startup ready: true, restart count 0 Nov 20 04:46:03.226: INFO: kube-proxy-windows-g2j89 started at 2022-11-20 01:10:05 +0000 UTC (0+1 container statuses recorded) Nov 20 04:46:03.226: INFO: Container kube-proxy ready: true, restart count 0 Nov 20 04:46:03.414: INFO: Latency metrics for node capz-conf-clckq Nov 20 04:46:03.414: INFO: Logging node info for node capz-conf-fmlvhp-control-plane-b26jb Nov 20 04:46:03.444: INFO: Node Info: &Node{ObjectMeta:{capz-conf-fmlvhp-control-plane-b26jb c66af1fa-58b8-4558-8db4-48fd044f3e9e 28349 0 2022-11-20 01:06:59 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:eastus-2 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-fmlvhp-control-plane-b26jb kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:eastus-2] map[cluster.x-k8s.io/cluster-name:capz-conf-fmlvhp cluster.x-k8s.io/cluster-namespace:capz-conf-fmlvhp cluster.x-k8s.io/machine:capz-conf-fmlvhp-control-plane-vnvbt cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-conf-fmlvhp-control-plane kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.89.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-20 
01:06:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-20 01:07:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-11-20 01:07:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-11-20 01:07:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-11-20 01:07:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-20 04:42:20 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-fmlvhp/providers/Microsoft.Compute/virtualMachines/capz-conf-fmlvhp-control-plane-b26jb,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-20 01:07:40 +0000 UTC,LastTransitionTime:2022-11-20 01:07:40 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-20 04:42:20 +0000 UTC,LastTransitionTime:2022-11-20 
01:06:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-20 04:42:20 +0000 UTC,LastTransitionTime:2022-11-20 01:06:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-20 04:42:20 +0000 UTC,LastTransitionTime:2022-11-20 01:06:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-20 04:42:20 +0000 UTC,LastTransitionTime:2022-11-20 01:07:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-fmlvhp-control-plane-b26jb,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:dd4b205b73c3437f8d4072eaa7e987bd,SystemUUID:6f6cc87d-f984-fb40-b2c2-f407cd2b06d2,BootID:db6fac5b-4561-4119-aa40-0dfa37daf137,KernelVersion:5.4.0-1091-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,KubeProxyVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-apiserver:v1.27.0-alpha.0.32_57eb5d631ccd61],SizeBytes:135160275,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.0.32_57eb5d631ccd61],SizeBytes:124990265,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:b83c1d70989e1fe87583607bf5aee1ee34e52773d4755b95f5cf5a451962f3a4 registry.k8s.io/etcd:3.5.5-0],SizeBytes:102417044,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-proxy:v1.27.0-alpha.0.32_57eb5d631ccd61],SizeBytes:67201736,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-scheduler:v1.27.0-alpha.0.32_57eb5d631ccd61],SizeBytes:57660216,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3 
registry.k8s.io/kube-apiserver:v1.25.3],SizeBytes:34238163,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91 registry.k8s.io/kube-controller-manager:v1.25.3],SizeBytes:31261869,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f registry.k8s.io/kube-proxy:v1.25.3],SizeBytes:20265805,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e registry.k8s.io/kube-scheduler:v1.25.3],SizeBytes:15798744,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 20 04:46:03.445: INFO: Logging kubelet events for node capz-conf-fmlvhp-control-plane-b26jb Nov 20 04:46:03.474: INFO: Logging pods the kubelet thinks is on node capz-conf-fmlvhp-control-plane-b26jb Nov 20 04:46:03.524: INFO: calico-kube-controllers-657b584867-kprw6 started at 2022-11-20 01:07:32 +0000 UTC (0+1 container statuses recorded) Nov 20 04:46:03.524: INFO: Container calico-kube-controllers ready: true, restart count 0 Nov 20 04:46:03.524: INFO: kube-scheduler-capz-conf-fmlvhp-control-plane-b26jb started at 2022-11-20 01:07:02 +0000 UTC (0+1 container statuses recorded) Nov 20 04:46:03.524: INFO: Container kube-scheduler ready: true, restart count 0 Nov 20 04:46:03.524: INFO: kube-proxy-grwp5 started at 2022-11-20 01:07:02 +0000 UTC (0+1 container statuses recorded) Nov 20 04:46:03.524: INFO: Container kube-proxy ready: true, restart count 0 Nov 20 04:46:03.524: INFO: calico-node-2d9f6 started at 2022-11-20 01:07:18 +0000 UTC (2+1 container statuses recorded) Nov 20 04:46:03.524: INFO: Init container upgrade-ipam ready: true, restart count 0 Nov 20 04:46:03.524: INFO: Init container install-cni ready: true, restart count 0 Nov 20 04:46:03.524: INFO: Container calico-node ready: true, restart count 0 Nov 20 04:46:03.524: INFO: coredns-787d4945fb-w8th2 started at 2022-11-20 01:07:32 +0000 UTC (0+1 container statuses recorded) Nov 20 04:46:03.524: INFO: Container coredns ready: true, restart count 0 Nov 20 04:46:03.524: INFO: metrics-server-c9574f845-dwd4x started at 2022-11-20 01:07:32 +0000 UTC (0+1 container statuses recorded) Nov 20 04:46:03.524: INFO: Container metrics-server ready: true, restart count 0 Nov 20 04:46:03.524: INFO: etcd-capz-conf-fmlvhp-control-plane-b26jb started at 2022-11-20 01:07:02 +0000 UTC (0+1 container statuses recorded) Nov 20 04:46:03.524: INFO: Container etcd ready: true, restart count 0 Nov 20 04:46:03.524: INFO: kube-apiserver-capz-conf-fmlvhp-control-plane-b26jb started at 2022-11-20 01:07:01 +0000 UTC (0+1 
container statuses recorded) Nov 20 04:46:03.524: INFO: Container kube-apiserver ready: true, restart count 0 Nov 20 04:46:03.524: INFO: kube-controller-manager-capz-conf-fmlvhp-control-plane-b26jb started at 2022-11-20 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 20 04:46:03.524: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 20 04:46:03.524: INFO: coredns-787d4945fb-jnvnw started at 2022-11-20 01:07:32 +0000 UTC (0+1 container statuses recorded) Nov 20 04:46:03.524: INFO: Container coredns ready: true, restart count 0 Nov 20 04:46:03.668: INFO: Latency metrics for node capz-conf-fmlvhp-control-plane-b26jb Nov 20 04:46:03.668: INFO: Logging node info for node capz-conf-j95hl Nov 20 04:46:03.697: INFO: Node Info: &Node{ObjectMeta:{capz-conf-j95hl 9874c50e-dbb9-48a0-a3e6-e1158b58eb2b 12781 0 2022-11-20 01:09:13 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-j95hl kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-fmlvhp cluster.x-k8s.io/cluster-namespace:capz-conf-fmlvhp cluster.x-k8s.io/machine:capz-conf-fmlvhp-md-win-59d5d57569-9kkk6 cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-fmlvhp-md-win-59d5d57569 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.119.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:a2:fd:fd volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet.exe Update v1 2022-11-20 01:09:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-20 01:09:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {manager Update v1 2022-11-20 01:10:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-20 01:10:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {kubelet.exe Update v1 2022-11-20 02:10:09 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{}}},"f:images":{}}} status} {kube-controller-manager Update v1 2022-11-20 02:11:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2022-11-20 02:11:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} }]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-fmlvhp/providers/Microsoft.Compute/virtualMachines/capz-conf-j95hl,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2022-11-20 02:11:06 +0000 UTC,},Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:2022-11-20 02:11:11 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2022-11-20 02:10:09 +0000 UTC,LastTransitionTime:2022-11-20 02:11:06 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2022-11-20 02:10:09 +0000 UTC,LastTransitionTime:2022-11-20 02:11:06 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2022-11-20 02:10:09 +0000 UTC,LastTransitionTime:2022-11-20 02:11:06 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:2022-11-20 02:10:09 +0000 UTC,LastTransitionTime:2022-11-20 02:11:06 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-j95hl,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-j95hl,SystemUUID:1EDFC854-811A-4EAC-947F-A7208BD291AA,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 
Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,KubeProxyVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:269513752,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:207280609,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:203697965,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:168375296,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:167222041,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.27.0-alpha.0.32_57eb5d631ccd61-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 20 04:46:03.697: INFO: Logging kubelet events for node capz-conf-j95hl Nov 20 04:46:03.726: INFO: Logging pods the kubelet thinks is on node capz-conf-j95hl Nov 20 04:46:33.754: INFO: Unable to retrieve kubelet pods for node capz-conf-j95hl: error trying to reach service: dial tcp 10.1.0.4:10250: i/o timeout [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "horizontal-pod-autoscaling-2194" for this suite. �[38;5;243m11/20/22 04:46:33.754�[0m
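Triage note (not part of the run's output): the teardown above fails to list kubelet pods for capz-conf-j95hl with "dial tcp 10.1.0.4:10250: i/o timeout", which is consistent with the node's conditions being Unknown ("Kubelet stopped posting node status.", last transition 02:11:06) and the node.kubernetes.io/unreachable NoSchedule/NoExecute taints the node controller added at 02:11:06 and 02:11:11. The sketch below is a minimal, hypothetical client-go check of that state; the hard-coded node name and the KUBECONFIG handling are assumptions, and this is not the e2e framework's own helper.

package main

// Prints the Ready condition and taints of one node, i.e. the state that the
// repeated "Condition Ready ... is false, but Node is tainted by
// NodeController" messages above keep reporting. Hypothetical triage helper.

import (
	"context"
	"fmt"
	"os"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: KUBECONFIG points at the cluster's kubeconfig (e.g. /tmp/kubeconfig in this run).
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "capz-conf-j95hl", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == v1.NodeReady {
			fmt.Printf("Ready=%s reason=%s lastTransition=%s\n", c.Status, c.Reason, c.LastTransitionTime)
		}
	}
	for _, t := range node.Spec.Taints {
		fmt.Printf("taint %s:%s added=%v\n", t.Key, t.Effect, t.TimeAdded)
	}
}

A node in this state also explains why the e2e framework's direct kubelet query (a GET through the API server's node proxy to port 10250) times out in the log above.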
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-autoscaling\]\s\[Feature\:HPA\]\s\[Serial\]\s\[Slow\]\sHorizontal\spod\sautoscaling\s\(non\-default\sbehavior\)\swith\sboth\sscale\sup\sand\sdown\scontrols\sconfigured\sshould\skeep\srecommendation\swithin\sthe\srange\swith\sstabilization\swindow\sand\spod\slimit\srate$'
test/e2e/framework/autoscaling/autoscaling_utils.go:640 k8s.io/kubernetes/test/e2e/framework/autoscaling.runServiceAndWorkloadForResourceConsumer({0x801de88, 0xc001407d40}, {0x7ff34d8, 0xc0017337c0}, {0x7fda248, 0xc000fa7e78}, {0xc003edb2e0, 0x1f}, {0x75c2def, 0x8}, ...) test/e2e/framework/autoscaling/autoscaling_utils.go:640 +0x80f k8s.io/kubernetes/test/e2e/framework/autoscaling.newResourceConsumer({0x75c2def, 0x8}, {0xc003edb2e0, 0x1f}, {{0x75b7352, 0x4}, {0x75c0585, 0x7}, {0x75c8a66, 0xa}}, ...) test/e2e/framework/autoscaling/autoscaling_utils.go:205 +0x4b5 k8s.io/kubernetes/test/e2e/framework/autoscaling.NewDynamicResourceConsumer(...) test/e2e/framework/autoscaling/autoscaling_utils.go:143 k8s.io/kubernetes/test/e2e/autoscaling.glob..func8.6.2() test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:455 +0x1cc from junit.kubetest.01.xml
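Triage note on the stack trace above and the log that follows (not part of the test output): runServiceAndWorkloadForResourceConsumer creates the consumer Deployment and then waits for its pods to start; in this run only consumer-858f58cb45-sgzhz reaches Running while consumer-858f58cb45-vl958 stays Pending on capz-conf-j95hl (tainted unreachable since 02:11), so the wait gives up with "only 1 pods started out of 2". The code below is a rough sketch of that style of readiness wait using a plain client-go poll; the function name, selector, interval, and timeout are illustrative and this is not the framework's actual implementation.

package triage

// waitForRunningPods is a hypothetical helper illustrating the kind of check
// that fails here: poll the pods behind a label selector until the expected
// number are Running, and report how many made it if the timeout expires.

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForRunningPods(cs kubernetes.Interface, ns, selector string, want int, timeout time.Duration) error {
	running := 0
	err := wait.PollImmediate(10*time.Second, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		running = 0
		for _, p := range pods.Items {
			if p.Status.Phase == v1.PodRunning {
				running++
			}
		}
		return running == want, nil
	})
	if err != nil {
		return fmt.Errorf("only %d pods started out of %d", running, want)
	}
	return nil
}

With one of two replicas stuck Pending on an unreachable node, a wait of this shape can only time out, which matches the FAIL recorded below.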
[BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/20/22 02:10:03.498�[0m Nov 20 02:10:03.498: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename horizontal-pod-autoscaling �[38;5;243m11/20/22 02:10:03.499�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/20/22 02:10:03.587�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/20/22 02:10:03.641�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/metrics/init/init.go:31 [It] should keep recommendation within the range with stabilization window and pod limit rate test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:447 �[1mSTEP:�[0m setting up resource consumer and HPA �[38;5;243m11/20/22 02:10:03.696�[0m Nov 20 02:10:03.696: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Running consuming RC consumer via apps/v1beta2, Kind=Deployment with 2 replicas �[38;5;243m11/20/22 02:10:03.697�[0m �[1mSTEP:�[0m Creating deployment consumer in namespace horizontal-pod-autoscaling-5914 �[38;5;243m11/20/22 02:10:03.741�[0m I1120 02:10:03.777422 15 runners.go:193] Created deployment with name: consumer, namespace: horizontal-pod-autoscaling-5914, replica count: 2 I1120 02:10:13.829054 15 runners.go:193] consumer Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1120 02:10:23.829814 15 runners.go:193] consumer Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1120 02:10:33.830479 15 runners.go:193] consumer Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1120 02:10:43.831503 15 runners.go:193] consumer Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1120 02:10:53.832017 15 runners.go:193] consumer Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1120 02:11:03.832384 15 runners.go:193] consumer Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1120 02:11:13.832635 15 runners.go:193] consumer Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1120 02:11:23.832777 15 runners.go:193] consumer Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1120 02:11:33.833847 15 runners.go:193] consumer Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1120 02:11:43.834958 15 runners.go:193] consumer Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1120 02:11:53.836203 15 runners.go:193] consumer Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1120 02:12:03.836439 15 runners.go:193] consumer Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1120 02:12:13.837051 15 runners.go:193] 
consumer Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1120 02:12:23.837769 15 runners.go:193] consumer Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1120 02:12:33.838025 15 runners.go:193] consumer Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1120 02:12:43.838742 15 runners.go:193] consumer Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1120 02:12:53.839357 15 runners.go:193] consumer Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1120 02:13:03.840608 15 runners.go:193] consumer Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1120 02:13:13.841421 15 runners.go:193] consumer Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1120 02:13:23.841743 15 runners.go:193] consumer Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1120 02:13:23.871986 15 runners.go:193] Pod consumer-858f58cb45-sgzhz capz-conf-clckq Running <nil> I1120 02:13:23.872060 15 runners.go:193] Pod consumer-858f58cb45-vl958 capz-conf-j95hl Pending <nil> Nov 20 02:13:23.872: INFO: Unexpected error: <*errors.errorString | 0xc00060e1e0>: { s: "only 1 pods started out of 2", } Nov 20 02:13:23.872: FAIL: only 1 pods started out of 2 Full Stack Trace k8s.io/kubernetes/test/e2e/framework/autoscaling.runServiceAndWorkloadForResourceConsumer({0x801de88, 0xc001407d40}, {0x7ff34d8, 0xc0017337c0}, {0x7fda248, 0xc000fa7e78}, {0xc003edb2e0, 0x1f}, {0x75c2def, 0x8}, ...) test/e2e/framework/autoscaling/autoscaling_utils.go:640 +0x80f k8s.io/kubernetes/test/e2e/framework/autoscaling.newResourceConsumer({0x75c2def, 0x8}, {0xc003edb2e0, 0x1f}, {{0x75b7352, 0x4}, {0x75c0585, 0x7}, {0x75c8a66, 0xa}}, ...) test/e2e/framework/autoscaling/autoscaling_utils.go:205 +0x4b5 k8s.io/kubernetes/test/e2e/framework/autoscaling.NewDynamicResourceConsumer(...) test/e2e/framework/autoscaling/autoscaling_utils.go:143 k8s.io/kubernetes/test/e2e/autoscaling.glob..func8.6.2() test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:455 +0x1cc [AfterEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/node/init/init.go:32 Nov 20 02:13:23.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Nov 20 02:13:23.903: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:13:25.934: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:13:27.933: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. 
Failure Nov 20 02:13:29.935: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. (this message repeated roughly every 2 seconds, with identical taints, through 02:15:03.935 while the AfterEach waited for all nodes to be ready)
Failure Nov 20 02:15:05.935: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:15:07.934: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:15:09.935: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:15:11.934: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:15:13.935: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:15:15.933: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:15:17.934: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:15:19.934: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:15:21.935: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:15:23.934: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:15:25.935: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:15:27.933: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. 
Failure Nov 20 02:15:29.935: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:15:31.934: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:15:33.934: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:15:35.934: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:15:37.934: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:15:39.934: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:15:41.934: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:15:43.935: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:15:45.934: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:15:47.934: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:15:49.934: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:15:51.935: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. 
Failure Nov 20 02:15:53.934: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:15:55.934: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:15:57.934: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:15:59.934: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:16:01.934: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:16:03.934: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:16:05.935: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:16:07.934: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:16:09.933: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:16:11.935: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:16:13.934: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:16:15.933: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. 
Failure Nov 20 02:16:17.934: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:16:19.933: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:16:21.934: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:16:23.934: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:16:23.964: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) dump namespaces | framework.go:196 �[1mSTEP:�[0m dump namespace information after failure �[38;5;243m11/20/22 02:16:23.965�[0m �[1mSTEP:�[0m Collecting events from namespace "horizontal-pod-autoscaling-5914". �[38;5;243m11/20/22 02:16:23.965�[0m �[1mSTEP:�[0m Found 14 events. 
�[38;5;243m11/20/22 02:16:23.994�[0m Nov 20 02:16:23.995: INFO: At 2022-11-20 02:10:03 +0000 UTC - event for consumer: {deployment-controller } ScalingReplicaSet: Scaled up replica set consumer-858f58cb45 to 2 Nov 20 02:16:23.995: INFO: At 2022-11-20 02:10:03 +0000 UTC - event for consumer-858f58cb45: {replicaset-controller } SuccessfulCreate: Created pod: consumer-858f58cb45-sgzhz Nov 20 02:16:23.995: INFO: At 2022-11-20 02:10:03 +0000 UTC - event for consumer-858f58cb45: {replicaset-controller } SuccessfulCreate: Created pod: consumer-858f58cb45-vl958 Nov 20 02:16:23.995: INFO: At 2022-11-20 02:10:03 +0000 UTC - event for consumer-858f58cb45-sgzhz: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-5914/consumer-858f58cb45-sgzhz to capz-conf-clckq Nov 20 02:16:23.995: INFO: At 2022-11-20 02:10:03 +0000 UTC - event for consumer-858f58cb45-vl958: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-5914/consumer-858f58cb45-vl958 to capz-conf-j95hl Nov 20 02:16:23.995: INFO: At 2022-11-20 02:10:35 +0000 UTC - event for consumer-858f58cb45-sgzhz: {kubelet capz-conf-clckq} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 20 02:16:23.995: INFO: At 2022-11-20 02:10:36 +0000 UTC - event for consumer-858f58cb45-sgzhz: {kubelet capz-conf-clckq} Created: Created container consumer Nov 20 02:16:23.995: INFO: At 2022-11-20 02:11:09 +0000 UTC - event for consumer-858f58cb45-sgzhz: {kubelet capz-conf-clckq} Started: Started container consumer Nov 20 02:16:23.995: INFO: At 2022-11-20 02:16:11 +0000 UTC - event for consumer-858f58cb45: {replicaset-controller } SuccessfulCreate: Created pod: consumer-858f58cb45-4ppj7 Nov 20 02:16:23.995: INFO: At 2022-11-20 02:16:11 +0000 UTC - event for consumer-858f58cb45-4ppj7: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-5914/consumer-858f58cb45-4ppj7 to capz-conf-clckq Nov 20 02:16:23.995: INFO: At 2022-11-20 02:16:11 +0000 UTC - event for consumer-858f58cb45-vl958: {taint-controller } TaintManagerEviction: Marking for deletion Pod horizontal-pod-autoscaling-5914/consumer-858f58cb45-vl958 Nov 20 02:16:23.995: INFO: At 2022-11-20 02:16:14 +0000 UTC - event for consumer-858f58cb45-4ppj7: {kubelet capz-conf-clckq} Created: Created container consumer Nov 20 02:16:23.995: INFO: At 2022-11-20 02:16:14 +0000 UTC - event for consumer-858f58cb45-4ppj7: {kubelet capz-conf-clckq} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 20 02:16:23.995: INFO: At 2022-11-20 02:16:17 +0000 UTC - event for consumer-858f58cb45-4ppj7: {kubelet capz-conf-clckq} Started: Started container consumer Nov 20 02:16:24.024: INFO: POD NODE PHASE GRACE CONDITIONS Nov 20 02:16:24.024: INFO: consumer-858f58cb45-4ppj7 capz-conf-clckq Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-20 02:16:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-20 02:16:17 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-20 02:16:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-20 02:16:11 +0000 UTC }] Nov 20 02:16:24.024: INFO: consumer-858f58cb45-sgzhz capz-conf-clckq Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-20 02:10:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-20 02:11:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-20 02:11:15 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2022-11-20 02:10:03 +0000 UTC }] Nov 20 02:16:24.024: INFO: consumer-858f58cb45-vl958 capz-conf-j95hl Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-20 02:10:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-20 02:10:03 +0000 UTC ContainersNotReady containers with unready status: [consumer]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-20 02:10:03 +0000 UTC ContainersNotReady containers with unready status: [consumer]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-20 02:10:03 +0000 UTC } {DisruptionTarget True 0001-01-01 00:00:00 +0000 UTC 2022-11-20 02:16:11 +0000 UTC DeletionByTaintManager Taint manager: deleting due to NoExecute taint}] Nov 20 02:16:24.025: INFO: Nov 20 02:16:54.148: INFO: Unable to fetch horizontal-pod-autoscaling-5914/consumer-858f58cb45-vl958/consumer logs: an error on the server ("unknown") has prevented the request from succeeding (get pods consumer-858f58cb45-vl958) Nov 20 02:16:54.255: INFO: Logging node info for node capz-conf-clckq Nov 20 02:16:54.309: INFO: Node Info: &Node{ObjectMeta:{capz-conf-clckq 7b0dbe9f-6e88-4c01-99b1-2465612a0daf 13090 0 2022-11-20 01:10:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-clckq kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-fmlvhp cluster.x-k8s.io/cluster-namespace:capz-conf-fmlvhp cluster.x-k8s.io/machine:capz-conf-fmlvhp-md-win-59d5d57569-95kvw cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-fmlvhp-md-win-59d5d57569 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.216.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:e4:64:fe volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet.exe Update v1 2022-11-20 01:10:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-20 01:10:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2022-11-20 01:10:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-20 01:11:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-20 01:11:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {e2e.test Update v1 2022-11-20 02:08:43 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}} status} {kubelet.exe Update v1 2022-11-20 02:13:54 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakecpu":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-fmlvhp/providers/Microsoft.Compute/virtualMachines/capz-conf-clckq,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},example.com/fakecpu: {{1 3} {<nil>} 1k DecimalSI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},example.com/fakecpu: {{1 3} {<nil>} 1k DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-20 02:13:54 +0000 UTC,LastTransitionTime:2022-11-20 01:10:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-20 02:13:54 +0000 UTC,LastTransitionTime:2022-11-20 01:10:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-20 02:13:54 +0000 UTC,LastTransitionTime:2022-11-20 01:10:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-20 02:13:54 +0000 UTC,LastTransitionTime:2022-11-20 01:10:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-clckq,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-clckq,SystemUUID:14041CED-10D5-4B34-9D4C-344B56A7FFCF,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,KubeProxyVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:269513752,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e 
registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:207280609,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:203697965,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:168375296,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:167222041,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.27.0-alpha.0.32_57eb5d631ccd61-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 20 02:16:54.310: INFO: Logging kubelet events for node capz-conf-clckq Nov 20 02:16:54.339: INFO: Logging pods the kubelet thinks is on node capz-conf-clckq Nov 20 02:16:54.400: INFO: calico-node-windows-v42gv started at 2022-11-20 01:10:05 +0000 UTC (1+2 container statuses recorded) Nov 20 02:16:54.400: INFO: Init container install-cni ready: true, restart count 0 Nov 20 02:16:54.400: INFO: Container calico-node-felix ready: true, restart count 1 Nov 20 02:16:54.400: INFO: Container calico-node-startup ready: true, restart count 0 Nov 20 02:16:54.400: INFO: csi-proxy-6bzv9 started at 2022-11-20 01:10:37 +0000 UTC (0+1 container statuses recorded) Nov 20 02:16:54.400: INFO: Container csi-proxy ready: true, restart count 0 Nov 20 02:16:54.400: INFO: kube-proxy-windows-g2j89 started at 2022-11-20 01:10:05 +0000 UTC (0+1 container statuses recorded) Nov 20 02:16:54.400: INFO: Container kube-proxy ready: true, restart count 0 Nov 20 02:16:54.400: INFO: consumer-858f58cb45-sgzhz started at 2022-11-20 02:10:03 +0000 UTC (0+1 container statuses recorded) Nov 20 02:16:54.400: INFO: Container consumer ready: true, restart count 0 Nov 20 02:16:54.400: INFO: consumer-858f58cb45-4ppj7 started at 2022-11-20 02:16:11 +0000 UTC (0+1 container statuses recorded) Nov 20 02:16:54.400: INFO: Container consumer ready: true, restart count 0 Nov 20 
02:16:54.400: INFO: containerd-logger-g67b6 started at 2022-11-20 01:10:05 +0000 UTC (0+1 container statuses recorded) Nov 20 02:16:54.400: INFO: Container containerd-logger ready: true, restart count 0 Nov 20 02:16:54.568: INFO: Latency metrics for node capz-conf-clckq Nov 20 02:16:54.568: INFO: Logging node info for node capz-conf-fmlvhp-control-plane-b26jb Nov 20 02:16:54.598: INFO: Node Info: &Node{ObjectMeta:{capz-conf-fmlvhp-control-plane-b26jb c66af1fa-58b8-4558-8db4-48fd044f3e9e 13132 0 2022-11-20 01:06:59 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:eastus-2 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-fmlvhp-control-plane-b26jb kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:eastus-2] map[cluster.x-k8s.io/cluster-name:capz-conf-fmlvhp cluster.x-k8s.io/cluster-namespace:capz-conf-fmlvhp cluster.x-k8s.io/machine:capz-conf-fmlvhp-control-plane-vnvbt cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-conf-fmlvhp-control-plane kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.89.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-20 01:06:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-20 01:07:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-11-20 01:07:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-11-20 01:07:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-11-20 01:07:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-20 02:14:22 +0000 UTC FieldsV1 
{"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-fmlvhp/providers/Microsoft.Compute/virtualMachines/capz-conf-fmlvhp-control-plane-b26jb,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-20 01:07:40 +0000 UTC,LastTransitionTime:2022-11-20 01:07:40 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-20 02:14:22 +0000 UTC,LastTransitionTime:2022-11-20 01:06:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-20 02:14:22 +0000 UTC,LastTransitionTime:2022-11-20 01:06:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-20 02:14:22 +0000 UTC,LastTransitionTime:2022-11-20 01:06:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-20 02:14:22 +0000 UTC,LastTransitionTime:2022-11-20 01:07:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-fmlvhp-control-plane-b26jb,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:dd4b205b73c3437f8d4072eaa7e987bd,SystemUUID:6f6cc87d-f984-fb40-b2c2-f407cd2b06d2,BootID:db6fac5b-4561-4119-aa40-0dfa37daf137,KernelVersion:5.4.0-1091-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,KubeProxyVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-apiserver:v1.27.0-alpha.0.32_57eb5d631ccd61],SizeBytes:135160275,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.0.32_57eb5d631ccd61],SizeBytes:124990265,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:b83c1d70989e1fe87583607bf5aee1ee34e52773d4755b95f5cf5a451962f3a4 registry.k8s.io/etcd:3.5.5-0],SizeBytes:102417044,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-proxy:v1.27.0-alpha.0.32_57eb5d631ccd61],SizeBytes:67201736,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-scheduler:v1.27.0-alpha.0.32_57eb5d631ccd61],SizeBytes:57660216,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3 registry.k8s.io/kube-apiserver:v1.25.3],SizeBytes:34238163,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91 registry.k8s.io/kube-controller-manager:v1.25.3],SizeBytes:31261869,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f registry.k8s.io/kube-proxy:v1.25.3],SizeBytes:20265805,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e 
registry.k8s.io/kube-scheduler:v1.25.3],SizeBytes:15798744,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 20 02:16:54.598: INFO: Logging kubelet events for node capz-conf-fmlvhp-control-plane-b26jb Nov 20 02:16:54.627: INFO: Logging pods the kubelet thinks is on node capz-conf-fmlvhp-control-plane-b26jb Nov 20 02:16:54.674: INFO: kube-scheduler-capz-conf-fmlvhp-control-plane-b26jb started at 2022-11-20 01:07:02 +0000 UTC (0+1 container statuses recorded) Nov 20 02:16:54.674: INFO: Container kube-scheduler ready: true, restart count 0 Nov 20 02:16:54.674: INFO: kube-proxy-grwp5 started at 2022-11-20 01:07:02 +0000 UTC (0+1 container statuses recorded) Nov 20 02:16:54.674: INFO: Container kube-proxy ready: true, restart count 0 Nov 20 02:16:54.674: INFO: calico-node-2d9f6 started at 2022-11-20 01:07:18 +0000 UTC (2+1 container statuses recorded) Nov 20 02:16:54.674: INFO: Init container upgrade-ipam ready: true, restart count 0 Nov 20 02:16:54.674: INFO: Init container install-cni ready: true, restart count 0 Nov 20 02:16:54.674: INFO: Container calico-node ready: true, restart count 0 Nov 20 02:16:54.674: INFO: coredns-787d4945fb-w8th2 started at 2022-11-20 01:07:32 +0000 UTC (0+1 container statuses recorded) Nov 20 02:16:54.674: INFO: Container coredns ready: true, restart count 0 Nov 20 02:16:54.674: INFO: metrics-server-c9574f845-dwd4x started at 2022-11-20 01:07:32 +0000 UTC (0+1 container statuses recorded) Nov 20 02:16:54.674: INFO: Container metrics-server ready: true, restart count 0 Nov 20 02:16:54.674: INFO: calico-kube-controllers-657b584867-kprw6 started at 2022-11-20 01:07:32 +0000 UTC (0+1 container statuses recorded) Nov 20 02:16:54.674: INFO: Container calico-kube-controllers ready: true, restart count 0 Nov 20 02:16:54.674: INFO: etcd-capz-conf-fmlvhp-control-plane-b26jb started at 2022-11-20 01:07:02 +0000 UTC (0+1 container statuses recorded) Nov 20 02:16:54.674: INFO: Container etcd ready: true, restart count 0 Nov 20 02:16:54.674: INFO: kube-apiserver-capz-conf-fmlvhp-control-plane-b26jb started at 2022-11-20 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 20 02:16:54.674: INFO: Container kube-apiserver ready: true, restart count 0 Nov 20 02:16:54.674: INFO: kube-controller-manager-capz-conf-fmlvhp-control-plane-b26jb started at 2022-11-20 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 20 02:16:54.674: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 20 02:16:54.674: INFO: coredns-787d4945fb-jnvnw started at 2022-11-20 01:07:32 +0000 UTC (0+1 container statuses recorded) Nov 20 02:16:54.674: INFO: Container coredns ready: true, restart count 0 Nov 20 02:16:54.818: INFO: Latency metrics for node capz-conf-fmlvhp-control-plane-b26jb Nov 20 02:16:54.818: INFO: Logging node info for node capz-conf-j95hl Nov 20 02:16:54.847: INFO: Node Info: 
&Node{ObjectMeta:{capz-conf-j95hl 9874c50e-dbb9-48a0-a3e6-e1158b58eb2b 12781 0 2022-11-20 01:09:13 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-j95hl kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-fmlvhp cluster.x-k8s.io/cluster-namespace:capz-conf-fmlvhp cluster.x-k8s.io/machine:capz-conf-fmlvhp-md-win-59d5d57569-9kkk6 cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-fmlvhp-md-win-59d5d57569 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.119.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:a2:fd:fd volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet.exe Update v1 2022-11-20 01:09:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-20 01:09:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {manager Update v1 2022-11-20 01:10:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-20 01:10:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {kubelet.exe Update v1 2022-11-20 02:10:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{}}},"f:images":{}}} status} {kube-controller-manager Update v1 2022-11-20 02:11:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2022-11-20 02:11:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} 
}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-fmlvhp/providers/Microsoft.Compute/virtualMachines/capz-conf-j95hl,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2022-11-20 02:11:06 +0000 UTC,},Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:2022-11-20 02:11:11 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2022-11-20 02:10:09 +0000 UTC,LastTransitionTime:2022-11-20 02:11:06 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2022-11-20 02:10:09 +0000 UTC,LastTransitionTime:2022-11-20 02:11:06 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2022-11-20 02:10:09 +0000 UTC,LastTransitionTime:2022-11-20 02:11:06 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:2022-11-20 02:10:09 +0000 UTC,LastTransitionTime:2022-11-20 02:11:06 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-j95hl,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-j95hl,SystemUUID:1EDFC854-811A-4EAC-947F-A7208BD291AA,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,KubeProxyVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:269513752,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:207280609,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:203697965,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:168375296,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:167222041,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.27.0-alpha.0.32_57eb5d631ccd61-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 20 02:16:54.847: INFO: Logging kubelet events for node capz-conf-j95hl Nov 20 02:16:54.875: INFO: Logging pods the kubelet thinks is on node capz-conf-j95hl Nov 20 02:17:24.904: INFO: Unable to retrieve kubelet pods for node capz-conf-j95hl: error trying to reach service: dial tcp 10.1.0.4:10250: i/o timeout [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "horizontal-pod-autoscaling-5914" for this suite. �[38;5;243m11/20/22 02:17:24.904�[0m
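The repeated "Condition Ready of node capz-conf-j95hl is false" messages above, together with the node.kubernetes.io/unreachable taints added at 02:11:06/02:11:11 and the closing "dial tcp 10.1.0.4:10250: i/o timeout", indicate the Windows worker dropped out mid-run, so the autoscaler never reached the replica count the test was waiting for. As a minimal sketch for inspecting that state outside the suite (the kubeconfig path and the standalone client-go program are assumptions, not part of the e2e code), the node's Ready condition and taints could be read like this:

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the run used (path taken from the log; adjust as needed).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Fetch the node the log reports as NotReady and tainted.
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "capz-conf-j95hl", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Ready with Status Unknown / reason NodeStatusUnknown matches
	// "Kubelet stopped posting node status." in the node dump above.
	for _, cond := range node.Status.Conditions {
		if cond.Type == v1.NodeReady {
			fmt.Printf("Ready=%s reason=%s since=%s\n", cond.Status, cond.Reason, cond.LastTransitionTime)
		}
	}

	// Taints added by the node lifecycle controller (node.kubernetes.io/unreachable).
	for _, t := range node.Spec.Taints {
		fmt.Printf("taint %s:%s\n", t.Key, t.Effect)
	}
}

Run against the same cluster, a Ready status of Unknown plus the two unreachable taints would confirm the node outage rather than an HPA regression as the reason the scale target was never met.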
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-autoscaling\]\s\[Feature\:HPA\]\s\[Serial\]\s\[Slow\]\sHorizontal\spod\sautoscaling\s\(non\-default\sbehavior\)\swith\sscale\slimited\sby\snumber\sof\sPods\srate\sshould\sscale\sdown\sno\smore\sthan\sgiven\snumber\sof\sPods\sper\sminute$'
test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:282 k8s.io/kubernetes/test/e2e/autoscaling.glob..func8.4.2() test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:282 +0x4f4 from junit.kubetest.01.xml
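Per the trace above, the failing assertion is the wait for the consumer deployment to drop from 3 to 2 replicas under a scale-down policy limited by number of Pods per minute. As an illustrative sketch of the autoscaling/v2 behavior stanza such a case configures (the helper name and values below are assumptions, not the test's own code), a rate limit of one Pod per minute looks roughly like:

package main

import (
	"fmt"

	autoscalingv2 "k8s.io/api/autoscaling/v2"
)

// buildScaleDownBehavior is a hypothetical helper: it returns an HPA behavior
// that lets the controller remove at most `pods` replicas per 60-second window,
// the kind of policy the "no more than given number of Pods per minute" case relies on.
func buildScaleDownBehavior(pods int32) *autoscalingv2.HorizontalPodAutoscalerBehavior {
	window := int32(0) // no extra stabilization; only the rate policy throttles scale-down
	return &autoscalingv2.HorizontalPodAutoscalerBehavior{
		ScaleDown: &autoscalingv2.HPAScalingRules{
			StabilizationWindowSeconds: &window,
			Policies: []autoscalingv2.HPAScalingPolicy{
				{
					Type:          autoscalingv2.PodsScalingPolicy, // limit by absolute Pod count
					Value:         pods,
					PeriodSeconds: 60,
				},
			},
		},
	}
}

func main() {
	b := buildScaleDownBehavior(1)
	p := b.ScaleDown.Policies[0]
	fmt.Printf("scale down at most %d pod(s) every %d seconds\n", p.Value, p.PeriodSeconds)
}

With such a policy the controller may remove at most Value replicas in any PeriodSeconds window, which is why the test expects a gradual step down to 2 replicas within its 3m30s timeout rather than an immediate drop; in the log that follows, the replica count instead climbs to 10 while the tainted node is unreachable, and the timeout fires.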
[BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/20/22 02:25:47.003�[0m Nov 20 02:25:47.003: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename horizontal-pod-autoscaling �[38;5;243m11/20/22 02:25:47.004�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/20/22 02:25:47.098�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/20/22 02:25:47.153�[0m [BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/metrics/init/init.go:31 [It] should scale down no more than given number of Pods per minute test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:258 �[1mSTEP:�[0m setting up resource consumer and HPA �[38;5;243m11/20/22 02:25:47.208�[0m Nov 20 02:25:47.208: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Running consuming RC consumer via apps/v1beta2, Kind=Deployment with 3 replicas �[38;5;243m11/20/22 02:25:47.209�[0m �[1mSTEP:�[0m Creating deployment consumer in namespace horizontal-pod-autoscaling-2274 �[38;5;243m11/20/22 02:25:47.255�[0m I1120 02:25:47.289152 15 runners.go:193] Created deployment with name: consumer, namespace: horizontal-pod-autoscaling-2274, replica count: 3 I1120 02:25:57.340639 15 runners.go:193] consumer Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1120 02:26:07.341847 15 runners.go:193] consumer Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m Running controller �[38;5;243m11/20/22 02:26:07.341�[0m �[1mSTEP:�[0m creating replication controller consumer-ctrl in namespace horizontal-pod-autoscaling-2274 �[38;5;243m11/20/22 02:26:07.385�[0m I1120 02:26:07.419481 15 runners.go:193] Created replication controller with name: consumer-ctrl, namespace: horizontal-pod-autoscaling-2274, replica count: 1 I1120 02:26:17.474458 15 runners.go:193] consumer-ctrl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 20 02:26:22.477: INFO: Waiting for amount of service:consumer-ctrl endpoints to be 1 Nov 20 02:26:22.512: INFO: RC consumer: consume 135 millicores in total Nov 20 02:26:22.512: INFO: RC consumer: setting consumption to 135 millicores in total Nov 20 02:26:22.512: INFO: RC consumer: sending request to consume 135 millicores Nov 20 02:26:22.512: INFO: RC consumer: consume 0 MB in total Nov 20 02:26:22.512: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2274/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=135&requestSizeMillicores=100 } Nov 20 02:26:22.512: INFO: RC consumer: consume custom metric 0 in total Nov 20 02:26:22.512: INFO: RC consumer: disabling mem consumption Nov 20 02:26:22.512: INFO: RC consumer: disabling consumption of custom metric QPS �[1mSTEP:�[0m triggering scale down by lowering consumption �[38;5;243m11/20/22 02:26:22.547�[0m Nov 20 02:26:22.547: INFO: RC consumer: consume 45 millicores in total Nov 20 02:26:22.670: INFO: RC consumer: setting consumption to 45 millicores in total Nov 20 02:26:22.703: INFO: waiting for 2 replicas (current: 3) Nov 20 
Nov 20 02:26:42.738: INFO: waiting for 2 replicas (current: 3)
Nov 20 02:26:52.670: INFO: RC consumer: sending request to consume 45 millicores
Nov 20 02:26:52.670: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2274/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=45&requestSizeMillicores=100 }
Nov 20 02:27:02.734: INFO: waiting for 2 replicas (current: 3)
Nov 20 02:27:22.734: INFO: waiting for 2 replicas (current: 7)
Nov 20 02:27:22.739: INFO: RC consumer: sending request to consume 45 millicores
Nov 20 02:27:22.739: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2274/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=45&requestSizeMillicores=100 }
Nov 20 02:27:42.735: INFO: waiting for 2 replicas (current: 7)
Nov 20 02:27:53.232: INFO: RC consumer: sending request to consume 45 millicores
Nov 20 02:27:53.232: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2274/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=45&requestSizeMillicores=100 }
Nov 20 02:28:02.740: INFO: waiting for 2 replicas (current: 9)
Nov 20 02:28:22.738: INFO: waiting for 2 replicas (current: 10)
Nov 20 02:28:23.273: INFO: RC consumer: sending request to consume 45 millicores
Nov 20 02:28:23.273: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2274/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=45&requestSizeMillicores=100 }
Nov 20 02:28:42.739: INFO: waiting for 2 replicas (current: 10)
Nov 20 02:28:53.313: INFO: RC consumer: sending request to consume 45 millicores
Nov 20 02:28:53.313: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2274/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=45&requestSizeMillicores=100 }
Nov 20 02:29:02.737: INFO: waiting for 2 replicas (current: 10)
Nov 20 02:29:22.735: INFO: waiting for 2 replicas (current: 9)
Nov 20 02:29:23.355: INFO: RC consumer: sending request to consume 45 millicores
Nov 20 02:29:23.355: INFO: ConsumeCPU URL: {https capz-conf-fmlvhp-e459aeb1.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-2274/services/consumer-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=45&requestSizeMillicores=100 }
Nov 20 02:29:42.737: INFO: waiting for 2 replicas (current: 9)
Nov 20 02:29:52.739: INFO: waiting for 2 replicas (current: 9)
Nov 20 02:29:52.739: INFO: Unexpected error: timeout waiting 3m30s for 2 replicas: <*errors.errorString | 0xc0001eb910>: { s: "timed out waiting for the condition", }
Nov 20 02:29:52.739: FAIL: timeout waiting 3m30s for 2 replicas: timed out waiting for the condition
Full Stack Trace
k8s.io/kubernetes/test/e2e/autoscaling.glob..func8.4.2()
	test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:282 +0x4f4
STEP: Removing consuming RC consumer 11/20/22 02:29:52.778
Nov 20 02:29:52.778: INFO: RC consumer: stopping metric consumer
Nov 20 02:29:52.778: INFO: RC consumer: stopping mem consumer
Nov 20 02:29:52.778: INFO: RC consumer: stopping CPU consumer
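The repeated "waiting for 2 replicas" entries above are the test polling the scale target until it reports the expected replica count; the FAIL is that poll expiring after 3m30s while the Deployment still sat at 9 replicas. A rough client-go sketch of such a poll follows (not the e2e framework's own helper); the namespace, target name and timeout are taken from the log, and the 20s interval is an assumption.

```go
// Sketch only: poll a Deployment until it reports the desired number of
// ready replicas, giving up after 3m30s as in the run above.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns, name, want := "horizontal-pod-autoscaling-2274", "consumer", int32(2)

	err = wait.PollImmediate(20*time.Second, 3*time.Minute+30*time.Second,
		func() (bool, error) {
			d, err := cs.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, err // abort the poll on API errors
			}
			fmt.Printf("waiting for %d replicas (current: %d)\n", want, d.Status.ReadyReplicas)
			return d.Status.ReadyReplicas == want, nil
		})
	if err != nil {
		fmt.Printf("timeout waiting for %d replicas: %v\n", want, err)
	}
}
```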
STEP: deleting Deployment.apps consumer in namespace horizontal-pod-autoscaling-2274, will wait for the garbage collector to delete the pods 11/20/22 02:30:02.779
Nov 20 02:30:03.042: INFO: Deleting Deployment.apps consumer took: 32.45188ms
Nov 20 02:30:03.143: INFO: Terminating Deployment.apps consumer pods took: 101.189705ms
STEP: deleting ReplicationController consumer-ctrl in namespace horizontal-pod-autoscaling-2274, will wait for the garbage collector to delete the pods 11/20/22 02:30:13.095
Nov 20 02:30:13.212: INFO: Deleting ReplicationController consumer-ctrl took: 32.296093ms
Nov 20 02:30:13.313: INFO: Terminating ReplicationController consumer-ctrl pods took: 101.24262ms
[AfterEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/node/init/init.go:32
Nov 20 02:30:15.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
Nov 20 02:30:15.700: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure
[... the same "Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController ..." message was logged roughly every two seconds until Nov 20 02:33:15.731 ...]
Nov 20 02:33:15.761: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/20/22 02:33:15.762
STEP: Collecting events from namespace "horizontal-pod-autoscaling-2274". 11/20/22 02:33:15.762
STEP: Found 76 events. 11/20/22 02:33:15.796
Nov 20 02:33:15.796: INFO: At 2022-11-20 02:25:47 +0000 UTC - event for consumer: {deployment-controller } ScalingReplicaSet: Scaled up replica set consumer-74c57b48f to 3
Nov 20 02:33:15.796: INFO: At 2022-11-20 02:25:47 +0000 UTC - event for consumer-74c57b48f: {replicaset-controller } SuccessfulCreate: Created pod: consumer-74c57b48f-b8sv8
Nov 20 02:33:15.796: INFO: At 2022-11-20 02:25:47 +0000 UTC - event for consumer-74c57b48f: {replicaset-controller } SuccessfulCreate: Created pod: consumer-74c57b48f-4db6g
Nov 20 02:33:15.796: INFO: At 2022-11-20 02:25:47 +0000 UTC - event for consumer-74c57b48f: {replicaset-controller } SuccessfulCreate: Created pod: consumer-74c57b48f-tbsfp
Nov 20 02:33:15.796: INFO: At 2022-11-20 02:25:47 +0000 UTC - event for consumer-74c57b48f-4db6g: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-2274/consumer-74c57b48f-4db6g to capz-conf-clckq
Nov 20 02:33:15.796: INFO: At 2022-11-20 02:25:47 +0000 UTC - event for consumer-74c57b48f-b8sv8: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-2274/consumer-74c57b48f-b8sv8 to capz-conf-clckq
Nov 20 02:33:15.796: INFO: At 2022-11-20 02:25:47 +0000 UTC - event for consumer-74c57b48f-tbsfp: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-2274/consumer-74c57b48f-tbsfp to capz-conf-clckq
Nov 20 02:33:15.796: INFO: At 2022-11-20 02:25:51 +0000 UTC - event for consumer-74c57b48f-b8sv8: {kubelet capz-conf-clckq} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine
Nov 20 02:33:15.796: INFO: At 2022-11-20 02:25:51 +0000 UTC - event for consumer-74c57b48f-tbsfp: {kubelet capz-conf-clckq} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine
Nov 20 02:33:15.796: INFO: At 2022-11-20 02:25:52 +0000 UTC - event for consumer-74c57b48f-4db6g: {kubelet capz-conf-clckq} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine
Nov 20 02:33:15.796: INFO: At 2022-11-20 02:25:52 +0000 UTC - event for consumer-74c57b48f-4db6g: {kubelet capz-conf-clckq} Created: Created container consumer
Nov 20 02:33:15.796: INFO: At 2022-11-20 02:25:52 +0000 UTC - event for consumer-74c57b48f-b8sv8: {kubelet capz-conf-clckq} Created: Created container consumer
Nov 20 02:33:15.796: INFO: At 2022-11-20 02:25:52 +0000 UTC - event for consumer-74c57b48f-tbsfp: {kubelet capz-conf-clckq} Created: Created container consumer
Nov 20 02:33:15.796: INFO: At 2022-11-20 02:25:57 +0000 UTC - event for consumer-74c57b48f-tbsfp: {kubelet capz-conf-clckq} Started: Started container consumer
Nov 20 02:33:15.796: INFO: At 2022-11-20 02:25:58 +0000 UTC - event for consumer-74c57b48f-4db6g: {kubelet capz-conf-clckq} Started: Started container consumer
Nov 20 02:33:15.796: INFO: At 2022-11-20 02:25:59 +0000 UTC - event for
consumer-74c57b48f-b8sv8: {kubelet capz-conf-clckq} Started: Started container consumer Nov 20 02:33:15.796: INFO: At 2022-11-20 02:26:07 +0000 UTC - event for consumer-ctrl: {replication-controller } SuccessfulCreate: Created pod: consumer-ctrl-bkxpw Nov 20 02:33:15.796: INFO: At 2022-11-20 02:26:07 +0000 UTC - event for consumer-ctrl-bkxpw: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-2274/consumer-ctrl-bkxpw to capz-conf-clckq Nov 20 02:33:15.796: INFO: At 2022-11-20 02:26:10 +0000 UTC - event for consumer-ctrl-bkxpw: {kubelet capz-conf-clckq} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 20 02:33:15.796: INFO: At 2022-11-20 02:26:10 +0000 UTC - event for consumer-ctrl-bkxpw: {kubelet capz-conf-clckq} Created: Created container consumer-ctrl Nov 20 02:33:15.797: INFO: At 2022-11-20 02:26:12 +0000 UTC - event for consumer-ctrl-bkxpw: {kubelet capz-conf-clckq} Started: Started container consumer-ctrl Nov 20 02:33:15.797: INFO: At 2022-11-20 02:26:37 +0000 UTC - event for consumer: {horizontal-pod-autoscaler } FailedComputeMetricsReplicas: invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API Nov 20 02:33:15.797: INFO: At 2022-11-20 02:26:37 +0000 UTC - event for consumer: {horizontal-pod-autoscaler } FailedGetResourceMetric: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API Nov 20 02:33:15.797: INFO: At 2022-11-20 02:26:52 +0000 UTC - event for consumer: {horizontal-pod-autoscaler } SuccessfulRescale: New size: 7; reason: cpu resource utilization (percentage of request) above target Nov 20 02:33:15.797: INFO: At 2022-11-20 02:26:52 +0000 UTC - event for consumer: {deployment-controller } ScalingReplicaSet: Scaled up replica set consumer-74c57b48f to 7 from 3 Nov 20 02:33:15.797: INFO: At 2022-11-20 02:26:52 +0000 UTC - event for consumer-74c57b48f: {replicaset-controller } SuccessfulCreate: Created pod: consumer-74c57b48f-q7mqm Nov 20 02:33:15.797: INFO: At 2022-11-20 02:26:52 +0000 UTC - event for consumer-74c57b48f: {replicaset-controller } SuccessfulCreate: Created pod: consumer-74c57b48f-ll6lh Nov 20 02:33:15.797: INFO: At 2022-11-20 02:26:52 +0000 UTC - event for consumer-74c57b48f: {replicaset-controller } SuccessfulCreate: Created pod: consumer-74c57b48f-lzlkt Nov 20 02:33:15.797: INFO: At 2022-11-20 02:26:52 +0000 UTC - event for consumer-74c57b48f: {replicaset-controller } SuccessfulCreate: Created pod: consumer-74c57b48f-c294b Nov 20 02:33:15.797: INFO: At 2022-11-20 02:26:52 +0000 UTC - event for consumer-74c57b48f-c294b: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-2274/consumer-74c57b48f-c294b to capz-conf-clckq Nov 20 02:33:15.797: INFO: At 2022-11-20 02:26:52 +0000 UTC - event for consumer-74c57b48f-ll6lh: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-2274/consumer-74c57b48f-ll6lh to capz-conf-clckq Nov 20 02:33:15.797: INFO: At 2022-11-20 02:26:52 +0000 UTC - event for consumer-74c57b48f-lzlkt: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-2274/consumer-74c57b48f-lzlkt to capz-conf-clckq Nov 20 02:33:15.797: INFO: At 2022-11-20 02:26:52 +0000 UTC - event for consumer-74c57b48f-q7mqm: {default-scheduler } Scheduled: Successfully assigned 
horizontal-pod-autoscaling-2274/consumer-74c57b48f-q7mqm to capz-conf-clckq Nov 20 02:33:15.797: INFO: At 2022-11-20 02:26:56 +0000 UTC - event for consumer-74c57b48f-ll6lh: {kubelet capz-conf-clckq} Created: Created container consumer Nov 20 02:33:15.797: INFO: At 2022-11-20 02:26:56 +0000 UTC - event for consumer-74c57b48f-ll6lh: {kubelet capz-conf-clckq} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 20 02:33:15.797: INFO: At 2022-11-20 02:26:56 +0000 UTC - event for consumer-74c57b48f-q7mqm: {kubelet capz-conf-clckq} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 20 02:33:15.797: INFO: At 2022-11-20 02:26:57 +0000 UTC - event for consumer-74c57b48f-c294b: {kubelet capz-conf-clckq} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 20 02:33:15.797: INFO: At 2022-11-20 02:26:57 +0000 UTC - event for consumer-74c57b48f-lzlkt: {kubelet capz-conf-clckq} Created: Created container consumer Nov 20 02:33:15.797: INFO: At 2022-11-20 02:26:57 +0000 UTC - event for consumer-74c57b48f-lzlkt: {kubelet capz-conf-clckq} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 20 02:33:15.797: INFO: At 2022-11-20 02:26:57 +0000 UTC - event for consumer-74c57b48f-q7mqm: {kubelet capz-conf-clckq} Created: Created container consumer Nov 20 02:33:15.797: INFO: At 2022-11-20 02:26:58 +0000 UTC - event for consumer-74c57b48f-c294b: {kubelet capz-conf-clckq} Created: Created container consumer Nov 20 02:33:15.797: INFO: At 2022-11-20 02:27:02 +0000 UTC - event for consumer-74c57b48f-ll6lh: {kubelet capz-conf-clckq} Started: Started container consumer Nov 20 02:33:15.797: INFO: At 2022-11-20 02:27:02 +0000 UTC - event for consumer-74c57b48f-q7mqm: {kubelet capz-conf-clckq} Started: Started container consumer Nov 20 02:33:15.797: INFO: At 2022-11-20 02:27:03 +0000 UTC - event for consumer-74c57b48f-c294b: {kubelet capz-conf-clckq} Started: Started container consumer Nov 20 02:33:15.797: INFO: At 2022-11-20 02:27:03 +0000 UTC - event for consumer-74c57b48f-lzlkt: {kubelet capz-conf-clckq} Started: Started container consumer Nov 20 02:33:15.797: INFO: At 2022-11-20 02:27:52 +0000 UTC - event for consumer: {horizontal-pod-autoscaler } SuccessfulRescale: New size: 10; reason: cpu resource utilization (percentage of request) above target Nov 20 02:33:15.797: INFO: At 2022-11-20 02:27:52 +0000 UTC - event for consumer: {deployment-controller } ScalingReplicaSet: Scaled up replica set consumer-74c57b48f to 10 from 7 Nov 20 02:33:15.797: INFO: At 2022-11-20 02:27:52 +0000 UTC - event for consumer-74c57b48f: {replicaset-controller } SuccessfulCreate: Created pod: consumer-74c57b48f-zv6hr Nov 20 02:33:15.797: INFO: At 2022-11-20 02:27:52 +0000 UTC - event for consumer-74c57b48f: {replicaset-controller } SuccessfulCreate: (combined from similar events): Created pod: consumer-74c57b48f-td7b4 Nov 20 02:33:15.797: INFO: At 2022-11-20 02:27:52 +0000 UTC - event for consumer-74c57b48f: {replicaset-controller } SuccessfulCreate: Created pod: consumer-74c57b48f-f85xd Nov 20 02:33:15.797: INFO: At 2022-11-20 02:27:52 +0000 UTC - event for consumer-74c57b48f-f85xd: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-2274/consumer-74c57b48f-f85xd to capz-conf-clckq Nov 20 02:33:15.797: INFO: At 2022-11-20 02:27:52 +0000 UTC - event for consumer-74c57b48f-td7b4: 
{default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-2274/consumer-74c57b48f-td7b4 to capz-conf-clckq Nov 20 02:33:15.797: INFO: At 2022-11-20 02:27:52 +0000 UTC - event for consumer-74c57b48f-zv6hr: {default-scheduler } Scheduled: Successfully assigned horizontal-pod-autoscaling-2274/consumer-74c57b48f-zv6hr to capz-conf-clckq Nov 20 02:33:15.797: INFO: At 2022-11-20 02:27:56 +0000 UTC - event for consumer-74c57b48f-f85xd: {kubelet capz-conf-clckq} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 20 02:33:15.797: INFO: At 2022-11-20 02:27:56 +0000 UTC - event for consumer-74c57b48f-td7b4: {kubelet capz-conf-clckq} Created: Created container consumer Nov 20 02:33:15.798: INFO: At 2022-11-20 02:27:56 +0000 UTC - event for consumer-74c57b48f-td7b4: {kubelet capz-conf-clckq} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 20 02:33:15.798: INFO: At 2022-11-20 02:27:56 +0000 UTC - event for consumer-74c57b48f-zv6hr: {kubelet capz-conf-clckq} Pulled: Container image "registry.k8s.io/e2e-test-images/resource-consumer:1.13" already present on machine Nov 20 02:33:15.798: INFO: At 2022-11-20 02:27:56 +0000 UTC - event for consumer-74c57b48f-zv6hr: {kubelet capz-conf-clckq} Created: Created container consumer Nov 20 02:33:15.798: INFO: At 2022-11-20 02:27:57 +0000 UTC - event for consumer-74c57b48f-f85xd: {kubelet capz-conf-clckq} Created: Created container consumer Nov 20 02:33:15.798: INFO: At 2022-11-20 02:28:01 +0000 UTC - event for consumer-74c57b48f-f85xd: {kubelet capz-conf-clckq} Started: Started container consumer Nov 20 02:33:15.798: INFO: At 2022-11-20 02:28:01 +0000 UTC - event for consumer-74c57b48f-zv6hr: {kubelet capz-conf-clckq} Started: Started container consumer Nov 20 02:33:15.798: INFO: At 2022-11-20 02:28:03 +0000 UTC - event for consumer-74c57b48f-td7b4: {kubelet capz-conf-clckq} Started: Started container consumer Nov 20 02:33:15.798: INFO: At 2022-11-20 02:29:07 +0000 UTC - event for consumer: {deployment-controller } ScalingReplicaSet: Scaled down replica set consumer-74c57b48f to 9 from 10 Nov 20 02:33:15.798: INFO: At 2022-11-20 02:29:07 +0000 UTC - event for consumer: {horizontal-pod-autoscaler } SuccessfulRescale: New size: 9; reason: All metrics below target Nov 20 02:33:15.798: INFO: At 2022-11-20 02:29:07 +0000 UTC - event for consumer-74c57b48f: {replicaset-controller } SuccessfulDelete: Deleted pod: consumer-74c57b48f-td7b4 Nov 20 02:33:15.798: INFO: At 2022-11-20 02:29:07 +0000 UTC - event for consumer-74c57b48f-td7b4: {kubelet capz-conf-clckq} Killing: Stopping container consumer Nov 20 02:33:15.798: INFO: At 2022-11-20 02:30:03 +0000 UTC - event for consumer-74c57b48f-4db6g: {kubelet capz-conf-clckq} Killing: Stopping container consumer Nov 20 02:33:15.798: INFO: At 2022-11-20 02:30:03 +0000 UTC - event for consumer-74c57b48f-b8sv8: {kubelet capz-conf-clckq} Killing: Stopping container consumer Nov 20 02:33:15.798: INFO: At 2022-11-20 02:30:03 +0000 UTC - event for consumer-74c57b48f-c294b: {kubelet capz-conf-clckq} Killing: Stopping container consumer Nov 20 02:33:15.798: INFO: At 2022-11-20 02:30:03 +0000 UTC - event for consumer-74c57b48f-f85xd: {kubelet capz-conf-clckq} Killing: Stopping container consumer Nov 20 02:33:15.798: INFO: At 2022-11-20 02:30:03 +0000 UTC - event for consumer-74c57b48f-ll6lh: {kubelet capz-conf-clckq} Killing: Stopping container consumer Nov 20 02:33:15.798: INFO: At 2022-11-20 
02:30:03 +0000 UTC - event for consumer-74c57b48f-lzlkt: {kubelet capz-conf-clckq} Killing: Stopping container consumer Nov 20 02:33:15.798: INFO: At 2022-11-20 02:30:03 +0000 UTC - event for consumer-74c57b48f-q7mqm: {kubelet capz-conf-clckq} Killing: Stopping container consumer Nov 20 02:33:15.798: INFO: At 2022-11-20 02:30:03 +0000 UTC - event for consumer-74c57b48f-tbsfp: {kubelet capz-conf-clckq} Killing: Stopping container consumer Nov 20 02:33:15.798: INFO: At 2022-11-20 02:30:03 +0000 UTC - event for consumer-74c57b48f-zv6hr: {kubelet capz-conf-clckq} Killing: Stopping container consumer Nov 20 02:33:15.798: INFO: At 2022-11-20 02:30:13 +0000 UTC - event for consumer-ctrl-bkxpw: {kubelet capz-conf-clckq} Killing: Stopping container consumer-ctrl Nov 20 02:33:15.826: INFO: POD NODE PHASE GRACE CONDITIONS Nov 20 02:33:15.826: INFO: Nov 20 02:33:15.856: INFO: Logging node info for node capz-conf-clckq Nov 20 02:33:15.884: INFO: Node Info: &Node{ObjectMeta:{capz-conf-clckq 7b0dbe9f-6e88-4c01-99b1-2465612a0daf 15084 0 2022-11-20 01:10:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-clckq kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-fmlvhp cluster.x-k8s.io/cluster-namespace:capz-conf-fmlvhp cluster.x-k8s.io/machine:capz-conf-fmlvhp-md-win-59d5d57569-95kvw cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-fmlvhp-md-win-59d5d57569 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.216.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:e4:64:fe volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet.exe Update v1 2022-11-20 01:10:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-20 01:10:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2022-11-20 01:10:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-20 01:11:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-20 01:11:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {e2e.test Update v1 2022-11-20 02:08:43 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}} status} {kubelet.exe Update v1 2022-11-20 02:29:10 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakecpu":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-fmlvhp/providers/Microsoft.Compute/virtualMachines/capz-conf-clckq,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},example.com/fakecpu: {{1 3} {<nil>} 1k DecimalSI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},example.com/fakecpu: {{1 3} {<nil>} 1k DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-20 02:29:10 +0000 UTC,LastTransitionTime:2022-11-20 01:10:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-20 02:29:10 +0000 UTC,LastTransitionTime:2022-11-20 01:10:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-20 02:29:10 +0000 UTC,LastTransitionTime:2022-11-20 01:10:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-20 02:29:10 +0000 UTC,LastTransitionTime:2022-11-20 01:10:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-clckq,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-clckq,SystemUUID:14041CED-10D5-4B34-9D4C-344B56A7FFCF,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,KubeProxyVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:269513752,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e 
registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:207280609,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:203697965,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:168375296,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:167222041,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.27.0-alpha.0.32_57eb5d631ccd61-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 20 02:33:15.885: INFO: Logging kubelet events for node capz-conf-clckq Nov 20 02:33:15.913: INFO: Logging pods the kubelet thinks is on node capz-conf-clckq Nov 20 02:33:15.962: INFO: containerd-logger-g67b6 started at 2022-11-20 01:10:05 +0000 UTC (0+1 container statuses recorded) Nov 20 02:33:15.962: INFO: Container containerd-logger ready: true, restart count 0 Nov 20 02:33:15.962: INFO: calico-node-windows-v42gv started at 2022-11-20 01:10:05 +0000 UTC (1+2 container statuses recorded) Nov 20 02:33:15.962: INFO: Init container install-cni ready: true, restart count 0 Nov 20 02:33:15.962: INFO: Container calico-node-felix ready: true, restart count 1 Nov 20 02:33:15.962: INFO: Container calico-node-startup ready: true, restart count 0 Nov 20 02:33:15.962: INFO: csi-proxy-6bzv9 started at 2022-11-20 01:10:37 +0000 UTC (0+1 container statuses recorded) Nov 20 02:33:15.962: INFO: Container csi-proxy ready: true, restart count 0 Nov 20 02:33:15.962: INFO: kube-proxy-windows-g2j89 started at 2022-11-20 01:10:05 +0000 UTC (0+1 container statuses recorded) Nov 20 02:33:15.962: INFO: Container kube-proxy ready: true, restart count 0 Nov 20 02:33:16.138: INFO: Latency metrics for node capz-conf-clckq Nov 20 02:33:16.138: INFO: Logging node info for node capz-conf-fmlvhp-control-plane-b26jb Nov 20 02:33:16.167: INFO: Node Info: 
&Node{ObjectMeta:{capz-conf-fmlvhp-control-plane-b26jb c66af1fa-58b8-4558-8db4-48fd044f3e9e 15134 0 2022-11-20 01:06:59 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:eastus-2 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-fmlvhp-control-plane-b26jb kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:eastus-2] map[cluster.x-k8s.io/cluster-name:capz-conf-fmlvhp cluster.x-k8s.io/cluster-namespace:capz-conf-fmlvhp cluster.x-k8s.io/machine:capz-conf-fmlvhp-control-plane-vnvbt cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-conf-fmlvhp-control-plane kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.89.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-20 01:06:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-20 01:07:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-11-20 01:07:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-11-20 01:07:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-11-20 01:07:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-20 02:29:40 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-fmlvhp/providers/Microsoft.Compute/virtualMachines/capz-conf-fmlvhp-control-plane-b26jb,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-20 01:07:40 +0000 UTC,LastTransitionTime:2022-11-20 01:07:40 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-20 02:29:40 +0000 UTC,LastTransitionTime:2022-11-20 01:06:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-20 02:29:40 +0000 UTC,LastTransitionTime:2022-11-20 01:06:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-20 02:29:40 +0000 UTC,LastTransitionTime:2022-11-20 01:06:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-20 02:29:40 +0000 UTC,LastTransitionTime:2022-11-20 01:07:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-fmlvhp-control-plane-b26jb,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:dd4b205b73c3437f8d4072eaa7e987bd,SystemUUID:6f6cc87d-f984-fb40-b2c2-f407cd2b06d2,BootID:db6fac5b-4561-4119-aa40-0dfa37daf137,KernelVersion:5.4.0-1091-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,KubeProxyVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-apiserver:v1.27.0-alpha.0.32_57eb5d631ccd61],SizeBytes:135160275,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.0.32_57eb5d631ccd61],SizeBytes:124990265,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:b83c1d70989e1fe87583607bf5aee1ee34e52773d4755b95f5cf5a451962f3a4 registry.k8s.io/etcd:3.5.5-0],SizeBytes:102417044,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-proxy:v1.27.0-alpha.0.32_57eb5d631ccd61],SizeBytes:67201736,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-scheduler:v1.27.0-alpha.0.32_57eb5d631ccd61],SizeBytes:57660216,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3 registry.k8s.io/kube-apiserver:v1.25.3],SizeBytes:34238163,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91 registry.k8s.io/kube-controller-manager:v1.25.3],SizeBytes:31261869,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f registry.k8s.io/kube-proxy:v1.25.3],SizeBytes:20265805,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e 
registry.k8s.io/kube-scheduler:v1.25.3],SizeBytes:15798744,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 20 02:33:16.168: INFO: Logging kubelet events for node capz-conf-fmlvhp-control-plane-b26jb Nov 20 02:33:16.195: INFO: Logging pods the kubelet thinks is on node capz-conf-fmlvhp-control-plane-b26jb Nov 20 02:33:16.241: INFO: kube-controller-manager-capz-conf-fmlvhp-control-plane-b26jb started at 2022-11-20 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 20 02:33:16.241: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 20 02:33:16.241: INFO: coredns-787d4945fb-jnvnw started at 2022-11-20 01:07:32 +0000 UTC (0+1 container statuses recorded) Nov 20 02:33:16.241: INFO: Container coredns ready: true, restart count 0 Nov 20 02:33:16.241: INFO: etcd-capz-conf-fmlvhp-control-plane-b26jb started at 2022-11-20 01:07:02 +0000 UTC (0+1 container statuses recorded) Nov 20 02:33:16.241: INFO: Container etcd ready: true, restart count 0 Nov 20 02:33:16.241: INFO: kube-apiserver-capz-conf-fmlvhp-control-plane-b26jb started at 2022-11-20 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 20 02:33:16.241: INFO: Container kube-apiserver ready: true, restart count 0 Nov 20 02:33:16.241: INFO: calico-node-2d9f6 started at 2022-11-20 01:07:18 +0000 UTC (2+1 container statuses recorded) Nov 20 02:33:16.241: INFO: Init container upgrade-ipam ready: true, restart count 0 Nov 20 02:33:16.241: INFO: Init container install-cni ready: true, restart count 0 Nov 20 02:33:16.241: INFO: Container calico-node ready: true, restart count 0 Nov 20 02:33:16.241: INFO: coredns-787d4945fb-w8th2 started at 2022-11-20 01:07:32 +0000 UTC (0+1 container statuses recorded) Nov 20 02:33:16.241: INFO: Container coredns ready: true, restart count 0 Nov 20 02:33:16.241: INFO: metrics-server-c9574f845-dwd4x started at 2022-11-20 01:07:32 +0000 UTC (0+1 container statuses recorded) Nov 20 02:33:16.241: INFO: Container metrics-server ready: true, restart count 0 Nov 20 02:33:16.241: INFO: calico-kube-controllers-657b584867-kprw6 started at 2022-11-20 01:07:32 +0000 UTC (0+1 container statuses recorded) Nov 20 02:33:16.241: INFO: Container calico-kube-controllers ready: true, restart count 0 Nov 20 02:33:16.241: INFO: kube-scheduler-capz-conf-fmlvhp-control-plane-b26jb started at 2022-11-20 01:07:02 +0000 UTC (0+1 container statuses recorded) Nov 20 02:33:16.241: INFO: Container kube-scheduler ready: true, restart count 0 Nov 20 02:33:16.241: INFO: kube-proxy-grwp5 started at 2022-11-20 01:07:02 +0000 UTC (0+1 container statuses recorded) Nov 20 02:33:16.241: INFO: Container kube-proxy ready: true, restart count 0 Nov 20 02:33:16.390: INFO: Latency metrics for node capz-conf-fmlvhp-control-plane-b26jb Nov 20 02:33:16.390: INFO: Logging node info for node capz-conf-j95hl Nov 20 02:33:16.419: INFO: Node Info: 
&Node{ObjectMeta:{capz-conf-j95hl 9874c50e-dbb9-48a0-a3e6-e1158b58eb2b 12781 0 2022-11-20 01:09:13 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-j95hl kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-fmlvhp cluster.x-k8s.io/cluster-namespace:capz-conf-fmlvhp cluster.x-k8s.io/machine:capz-conf-fmlvhp-md-win-59d5d57569-9kkk6 cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-fmlvhp-md-win-59d5d57569 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.119.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:a2:fd:fd volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet.exe Update v1 2022-11-20 01:09:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-20 01:09:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {manager Update v1 2022-11-20 01:10:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-20 01:10:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {kubelet.exe Update v1 2022-11-20 02:10:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{}}},"f:images":{}}} status} {kube-controller-manager Update v1 2022-11-20 02:11:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2022-11-20 02:11:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} 
}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-fmlvhp/providers/Microsoft.Compute/virtualMachines/capz-conf-j95hl,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2022-11-20 02:11:06 +0000 UTC,},Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:2022-11-20 02:11:11 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2022-11-20 02:10:09 +0000 UTC,LastTransitionTime:2022-11-20 02:11:06 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2022-11-20 02:10:09 +0000 UTC,LastTransitionTime:2022-11-20 02:11:06 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2022-11-20 02:10:09 +0000 UTC,LastTransitionTime:2022-11-20 02:11:06 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:2022-11-20 02:10:09 +0000 UTC,LastTransitionTime:2022-11-20 02:11:06 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-j95hl,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-j95hl,SystemUUID:1EDFC854-811A-4EAC-947F-A7208BD291AA,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,KubeProxyVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:269513752,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:207280609,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:203697965,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:168375296,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:167222041,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.27.0-alpha.0.32_57eb5d631ccd61-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 20 02:33:16.419: INFO: Logging kubelet events for node capz-conf-j95hl Nov 20 02:33:16.447: INFO: Logging pods the kubelet thinks is on node capz-conf-j95hl Nov 20 02:33:46.476: INFO: Unable to retrieve kubelet pods for node capz-conf-j95hl: error trying to reach service: dial tcp 10.1.0.4:10250: i/o timeout [DeferCleanup (Each)] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "horizontal-pod-autoscaling-2274" for this suite. �[38;5;243m11/20/22 02:33:46.476�[0m
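The dump above shows capz-conf-j95hl reporting every condition as Unknown ("Kubelet stopped posting node status"), carrying node.kubernetes.io/unreachable taints, and timing out on its kubelet endpoint (dial tcp 10.1.0.4:10250: i/o timeout). As a minimal, hedged client-go sketch (not part of the e2e framework; it only assumes the /tmp/kubeconfig path already shown in this log), the same node conditions and taints could be listed like this:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Assumption: the same kubeconfig the e2e run used.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, n := range nodes.Items {
        for _, c := range n.Status.Conditions {
            // A healthy node reports Ready=True; Unknown means the kubelet stopped posting status.
            fmt.Printf("%s %s=%s (%s)\n", n.Name, c.Type, c.Status, c.Reason)
        }
        for _, t := range n.Spec.Taints {
            // node.kubernetes.io/unreachable taints are what keep new pods off the node.
            fmt.Printf("%s taint %s:%s\n", n.Name, t.Key, t.Effect)
        }
    }
}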
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-scheduling\]\sSchedulerPreemption\s\[Serial\]\svalidates\spod\sdisruption\scondition\sis\sadded\sto\sthe\spreempted\spod$'
test/e2e/scheduling/preemption.go:383 k8s.io/kubernetes/test/e2e/scheduling.glob..func5.5() test/e2e/scheduling/preemption.go:383 +0xa52 from junit.kubetest.01.xml
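The failure logged below is a timeout in the step that waits for the preempted victim-pod to become terminating, i.e. to get a non-nil deletionTimestamp. As a rough, hedged sketch of that kind of wait (not the framework's actual helper at preemption.go:383; the namespace and pod name are taken from the log further down, and the 5m deadline matches the step's own timeout), the check amounts to polling the pod until it is marked for deletion:

package main

import (
    "context"
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    const ns, name = "sched-preemption-6380", "victim-pod" // values from the log below
    deadline := time.Now().Add(5 * time.Minute)            // the e2e step also waits up to 5m0s
    for time.Now().Before(deadline) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // A preempted pod gets a deletionTimestamp; a finalizer (the test adds
        // example.com/test-finalizer) may keep the object around, but the timestamp
        // alone is enough to call it "terminating".
        if pod.DeletionTimestamp != nil {
            fmt.Println("victim-pod is terminating")
            return
        }
        time.Sleep(2 * time.Second)
    }
    fmt.Println("timed out waiting for victim-pod to be terminating")
}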
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m11/20/22 02:46:16.252�[0m Nov 20 02:46:16.252: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP:�[0m Building a namespace api object, basename sched-preemption �[38;5;243m11/20/22 02:46:16.253�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m11/20/22 02:46:16.34�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m11/20/22 02:46:16.399�[0m [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:96 Nov 20 02:46:16.550: INFO: Waiting up to 1m0s for all nodes to be ready Nov 20 02:46:16.579: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:16.618: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:18.648: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:18.687: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:20.648: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:20.687: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:22.647: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:22.686: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:24.649: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. 
Failure Nov 20 02:46:24.686: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:26.649: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:26.687: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:28.648: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:28.686: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:30.648: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:30.686: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:32.648: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:32.686: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:34.649: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:34.687: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:36.647: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. 
Failure Nov 20 02:46:36.685: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:38.648: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:38.686: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:40.649: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:40.687: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:42.649: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:42.686: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:44.648: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:44.686: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:46.648: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:46.687: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:48.647: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. 
Failure Nov 20 02:46:48.685: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:50.647: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:50.685: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:52.647: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:52.685: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:54.648: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:54.686: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:56.648: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:56.685: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:58.647: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:46:58.685: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:47:00.648: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. 
Failure Nov 20 02:47:00.686: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:47:02.647: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:47:02.685: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:47:04.649: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:47:04.686: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:47:06.648: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:47:06.686: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:47:08.648: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:47:08.686: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:47:10.648: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:47:10.686: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:47:12.647: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. 
Failure Nov 20 02:47:12.685: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:47:14.648: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:47:14.694: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:47:16.647: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:47:16.685: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:47:16.730: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:47:16.767: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:47:16.798: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:47:16.798: INFO: Waiting for terminating namespaces to be deleted... [It] validates pod disruption condition is added to the preempted pod test/e2e/scheduling/preemption.go:324 �[1mSTEP:�[0m Select a node to run the lower and higher priority pods �[38;5;243m11/20/22 02:47:16.826�[0m �[1mSTEP:�[0m Create a low priority pod that consumes 1/1 of node resources �[38;5;243m11/20/22 02:47:16.866�[0m Nov 20 02:47:16.904: INFO: Created pod: victim-pod �[1mSTEP:�[0m Wait for the victim pod to be scheduled �[38;5;243m11/20/22 02:47:16.904�[0m Nov 20 02:47:16.904: INFO: Waiting up to 5m0s for pod "victim-pod" in namespace "sched-preemption-6380" to be "running" Nov 20 02:47:16.935: INFO: Pod "victim-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 31.031395ms Nov 20 02:47:18.967: INFO: Pod "victim-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062696833s Nov 20 02:47:20.966: INFO: Pod "victim-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062096004s Nov 20 02:47:22.964: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.059576363s Nov 20 02:47:22.964: INFO: Pod "victim-pod" satisfied condition "running" �[1mSTEP:�[0m Create a high priority pod to trigger preemption of the lower priority pod �[38;5;243m11/20/22 02:47:22.964�[0m Nov 20 02:47:22.999: INFO: Created pod: preemptor-pod �[1mSTEP:�[0m Waiting for the victim pod to be terminating �[38;5;243m11/20/22 02:47:22.999�[0m Nov 20 02:47:22.999: INFO: Waiting up to 5m0s for pod "victim-pod" in namespace "sched-preemption-6380" to be "is terminating" Nov 20 02:47:23.029: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 30.029983ms Nov 20 02:47:25.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.058835318s Nov 20 02:47:27.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.060036519s Nov 20 02:47:29.060: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.061418766s Nov 20 02:47:31.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 8.059052737s Nov 20 02:47:33.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 10.059262081s Nov 20 02:47:35.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 12.059875193s Nov 20 02:47:37.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 14.059097577s Nov 20 02:47:39.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 16.059687445s Nov 20 02:47:41.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 18.058910922s Nov 20 02:47:43.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 20.05916302s Nov 20 02:47:45.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 22.060506366s Nov 20 02:47:47.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 24.06035207s Nov 20 02:47:49.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 26.059997333s Nov 20 02:47:51.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 28.060709197s Nov 20 02:47:53.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 30.059615161s Nov 20 02:47:55.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 32.05882256s Nov 20 02:47:57.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 34.059746814s Nov 20 02:47:59.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 36.059337495s Nov 20 02:48:01.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 38.060168909s Nov 20 02:48:03.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 40.059876859s Nov 20 02:48:05.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 42.059774979s Nov 20 02:48:07.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 44.059042654s Nov 20 02:48:09.066: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 46.067634282s Nov 20 02:48:11.057: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 48.058744497s Nov 20 02:48:13.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 50.059339601s Nov 20 02:48:15.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 52.058910907s Nov 20 02:48:17.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 54.060149382s Nov 20 02:48:19.064: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 56.065117482s Nov 20 02:48:21.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 58.06042359s Nov 20 02:48:23.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m0.059348543s Nov 20 02:48:25.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m2.060132912s Nov 20 02:48:27.057: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m4.058744804s Nov 20 02:48:29.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m6.059234163s Nov 20 02:48:31.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m8.059171186s Nov 20 02:48:33.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m10.060054026s Nov 20 02:48:35.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m12.059501164s Nov 20 02:48:37.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m14.059143546s Nov 20 02:48:39.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m16.059999889s Nov 20 02:48:41.066: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m18.067700401s Nov 20 02:48:43.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m20.059741818s Nov 20 02:48:45.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m22.060299739s Nov 20 02:48:47.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m24.059076896s Nov 20 02:48:49.057: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m26.058736292s Nov 20 02:48:51.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m28.058933889s Nov 20 02:48:53.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m30.058825686s Nov 20 02:48:55.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m32.059097413s Nov 20 02:48:57.060: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m34.060926725s Nov 20 02:48:59.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m36.059578685s Nov 20 02:49:01.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m38.058774203s Nov 20 02:49:03.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m40.059335027s Nov 20 02:49:05.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m42.059768784s Nov 20 02:49:07.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m44.05893633s Nov 20 02:49:09.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m46.058917278s Nov 20 02:49:11.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m48.0603945s Nov 20 02:49:13.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m50.059683927s Nov 20 02:49:15.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m52.0591465s Nov 20 02:49:17.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 1m54.060641912s Nov 20 02:49:19.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m56.058939197s Nov 20 02:49:21.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 1m58.060098407s Nov 20 02:49:23.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m0.059330153s Nov 20 02:49:25.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m2.059887698s Nov 20 02:49:27.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m4.058957046s Nov 20 02:49:29.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m6.060025982s Nov 20 02:49:31.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m8.060419222s Nov 20 02:49:33.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m10.059355526s Nov 20 02:49:35.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m12.060477664s Nov 20 02:49:37.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m14.060351436s Nov 20 02:49:39.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m16.059777909s Nov 20 02:49:41.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m18.060120172s Nov 20 02:49:43.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m20.058897093s Nov 20 02:49:45.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m22.060010007s Nov 20 02:49:47.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m24.05944079s Nov 20 02:49:49.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m26.060156673s Nov 20 02:49:51.057: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m28.058721691s Nov 20 02:49:53.057: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m30.058593973s Nov 20 02:49:55.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m32.05965008s Nov 20 02:49:57.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m34.05960718s Nov 20 02:49:59.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m36.060504597s Nov 20 02:50:01.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m38.058846513s Nov 20 02:50:03.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m40.059096758s Nov 20 02:50:05.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m42.058931626s Nov 20 02:50:07.057: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m44.058624494s Nov 20 02:50:09.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m46.059277276s Nov 20 02:50:11.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m48.059632064s Nov 20 02:50:13.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m50.059502612s Nov 20 02:50:15.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m52.059117161s Nov 20 02:50:17.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m54.059688566s Nov 20 02:50:19.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 2m56.060689671s Nov 20 02:50:21.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2m58.058953316s Nov 20 02:50:23.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m0.059651408s Nov 20 02:50:25.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m2.06023097s Nov 20 02:50:27.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m4.0596253s Nov 20 02:50:29.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m6.059214365s Nov 20 02:50:31.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m8.058860447s Nov 20 02:50:33.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m10.058904562s Nov 20 02:50:35.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m12.060290161s Nov 20 02:50:37.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m14.05947733s Nov 20 02:50:39.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m16.05923207s Nov 20 02:50:41.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m18.059300399s Nov 20 02:50:43.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m20.059144692s Nov 20 02:50:45.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m22.060124804s Nov 20 02:50:47.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m24.06068952s Nov 20 02:50:49.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m26.060181947s Nov 20 02:50:51.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m28.060605811s Nov 20 02:50:53.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m30.059619158s Nov 20 02:50:55.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m32.059169444s Nov 20 02:50:57.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m34.060044037s Nov 20 02:50:59.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m36.058915426s Nov 20 02:51:01.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m38.058790587s Nov 20 02:51:03.060: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m40.061181116s Nov 20 02:51:05.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m42.06017574s Nov 20 02:51:07.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m44.060423155s Nov 20 02:51:09.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m46.05897094s Nov 20 02:51:11.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m48.059482987s Nov 20 02:51:13.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m50.059518659s Nov 20 02:51:15.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m52.060015785s Nov 20 02:51:17.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m54.059837026s Nov 20 02:51:19.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3m56.059865298s Nov 20 02:51:21.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 3m58.060219136s Nov 20 02:51:23.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m0.059069754s Nov 20 02:51:25.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m2.060318995s Nov 20 02:51:27.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m4.059074017s Nov 20 02:51:29.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m6.05982806s Nov 20 02:51:31.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m8.058995382s Nov 20 02:51:33.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m10.059092289s Nov 20 02:51:35.060: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m12.060909659s Nov 20 02:51:37.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m14.059863506s Nov 20 02:51:39.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m16.058894522s Nov 20 02:51:41.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m18.05916652s Nov 20 02:51:43.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m20.05993528s Nov 20 02:51:45.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m22.058987479s Nov 20 02:51:47.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m24.058962186s Nov 20 02:51:49.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m26.058993947s Nov 20 02:51:51.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m28.060263837s Nov 20 02:51:53.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m30.059353327s Nov 20 02:51:55.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m32.058975345s Nov 20 02:51:57.057: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m34.058694633s Nov 20 02:51:59.062: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m36.063282019s Nov 20 02:52:01.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m38.060073697s Nov 20 02:52:03.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m40.059280325s Nov 20 02:52:05.060: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m42.061715791s Nov 20 02:52:07.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m44.06038237s Nov 20 02:52:09.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m46.059924547s Nov 20 02:52:11.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m48.060135161s Nov 20 02:52:13.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m50.059200218s Nov 20 02:52:15.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m52.059441922s Nov 20 02:52:17.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m54.058984563s Nov 20 02:52:19.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m56.060644639s Nov 20 02:52:21.059: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4m58.060133357s Nov 20 02:52:23.058: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 5m0.059415417s Nov 20 02:52:23.087: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 5m0.087789848s Nov 20 02:52:23.088: INFO: Unexpected error: <*pod.timeoutError | 0xc003f57530>: { msg: "timed out while waiting for pod sched-preemption-6380/victim-pod to be is terminating", observedObjects: [ <*v1.Pod | 0xc003203680>{ TypeMeta: {Kind: "", APIVersion: ""}, ObjectMeta: { Name: "victim-pod", GenerateName: "", Namespace: "sched-preemption-6380", SelfLink: "", UID: "8641e081-a014-49ed-96ce-52ae86757fe9", ResourceVersion: "16962", Generation: 0, CreationTimestamp: { Time: { wall: 0, ext: 63804509236, loc: {name: "UTC", zone: nil, tx: nil, extend: "", cacheStart: 0, cacheEnd: 0, cacheZone: nil}, }, }, DeletionTimestamp: nil, DeletionGracePeriodSeconds: nil, Labels: nil, Annotations: { "cni.projectcalico.org/containerID": "5487ec1b298713a28c65e59a5f9f0d61d11b6ed73e6e3788cfbe14fe07f0248c", "cni.projectcalico.org/podIP": "192.168.216.121/32", "cni.projectcalico.org/podIPs": "192.168.216.121/32", }, OwnerReferences: nil, Finalizers: [ "example.com/test-finalizer", ], ManagedFields: [ { Manager: "e2e.test", Operation: "Update", APIVersion: "v1", Time: { Time: { wall: 0, ext: 63804509236, loc: {name: "UTC", zone: nil, tx: nil, extend: "", cacheStart: 0, cacheEnd: 0, cacheZone: nil}, }, }, FieldsType: "FieldsV1", FieldsV1: { Raw: "{\"f:metadata\":{\"f:finalizers\":{\".\":{},\"v:\\\"example.com/test-finalizer\\\"\":{}}},\"f:spec\":{\"f:affinity\":{\".\":{},\"f:nodeAffinity\":{\".\":{},\"f:requiredDuringSchedulingIgnoredDuringExecution\":{}}},\"f:containers\":{\"k:{\\\"name\\\":\\\"victim-pod\\\"}\":{\".\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{\".\":{},\"f:limits\":{\".\":{},\"f:scheduling.k8s.io/foo\":{}},\"f:requests\":{\".\":{},\"f:scheduling.k8s.io/foo\":{}}},\"f:securityContext\":{\".\":{},\"f:allowPrivilegeEscalation\":{},\"f:capabilities\":{\".\":{},\"f:drop\":{}}},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{}}},\"f:dnsPolicy\":{},\"f:enableServiceLinks\":{},\"f:priorityClassName\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{\".\":{},\"f:runAsNonRoot\":{},\"f:seccompProfile\":{\".\":{},\"f:type\":{}},\"f:windowsOptions\":{\".\":{},\"f:runAsUserName\":{}}},\"f:terminationGracePeriodSeconds\":{}}}", }, Subresource: "", }, { Manager: "Go-http-client", Operation: "Update", APIVersion: "v1", Time: { Time: { wall: 0, ext: 63804509237, loc: {name: "UTC", zone: nil, tx: nil, extend: "", cacheStart: 0, cacheEnd: 0, cacheZone: nil}, }, }, FieldsType: "FieldsV1", FieldsV1: { Raw: "{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:cni.projectcalico.org/containerID\":{},\"f:cni.projectcalico.org/podIP\":{},\"f:cni.projectcalico.org/podIPs\":{}}}}", }, Subresource: "status", }, { Manager: "kubelet.exe", Operation: "Update", APIVersion: "v1", Time: { Time: { wall: 0, ext: 63804509241, loc: {name: "UTC", zone: nil, tx: nil, extend: "", cacheStart: 0, cacheEnd: 0, cacheZone: nil}, }, }, FieldsType: "FieldsV1", FieldsV1: { Raw: 
"{\"f:status\":{\"f:conditions\":{\"k:{\\\"type\\\":\\\"ContainersReady\\\"}\":{\".\":{},\"f:lastProbeTime\":{},\"f:lastTransitionTime\":{},\"f:status\":{},\"f:type\":{}},\"k:{\\\"type\\\":\\\"Initialized\\\"}\":{\".\":{},\"f:lastProbeTime\":{},\"f:lastTransitionTime\":{},\"f:status\":{},\"f:type\":{}},\"k:{\\\"type\\\":\\\"Ready\\\"}\":{\".\":{},\"f:lastProbeTime\":{},\"f:lastTransitionTime\":{},\"f:status\":{},\"f:type\":{}}},\"f:containerStatuses\":{},\"f:hostIP\":{},\"f:phase\":{},\"f:podIP\":{},\"f:podIPs\":{\".\":{},\"k:{\\\"ip\\\":\\\"192.168.216.121\\\"}\":{\".\":{},\"f:ip\":{}}},\"f:startTime\":{}}}", }, Subresource: "status", }, ], }, Spec: { Volumes: [ { Name: "kube-api-access-9sczv", VolumeSource: { HostPath: nil, EmptyDir: nil, GCEPersistentDisk: nil, AWSElasticBlockStore: nil, GitRepo: nil, Secret: nil, NFS: nil, ISCSI: nil, Glusterfs: nil, PersistentVolumeClaim: nil, RBD: nil, FlexVolume: nil, Cinder: nil, CephFS: nil, Flocker: nil, DownwardAPI: nil, FC: nil, AzureFile: nil, ConfigMap: nil, VsphereVolume: nil, Quobyte: nil, AzureDisk: nil, PhotonPersistentDisk: nil, Projected: { Sources: [ { Secret: ..., DownwardAPI: ..., ConfigMap: ..., ServiceAccountToken: ..., }, { Secret: ..., DownwardAPI: ..., ConfigMap: ..., ServiceAccountToken: ..., }, { Secret: ..., DownwardAPI: ..., ConfigMap: ..., ... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. Learn more here: https://onsi.github.io/gomega/#adjusting-output Nov 20 02:52:23.088: FAIL: timed out while waiting for pod sched-preemption-6380/victim-pod to be is terminating Full Stack Trace k8s.io/kubernetes/test/e2e/scheduling.glob..func5.5() test/e2e/scheduling/preemption.go:383 +0xa52 Nov 20 02:52:23.089: INFO: Removing pod's "victim-pod" finalizer: "example.com/test-finalizer" Nov 20 02:52:23.663: INFO: Successfully updated pod "victim-pod" [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/node/init/init.go:32 Nov 20 02:52:23.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Nov 20 02:52:23.697: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:52:25.728: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:52:27.727: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:52:29.731: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:52:31.727: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. 
Failure Nov 20 02:52:33.731: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. (identical condition line repeated every ~2s through Nov 20 02:55:19.729) 
Failure Nov 20 02:55:21.729: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:55:23.728: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure Nov 20 02:55:23.759: INFO: Condition Ready of node capz-conf-j95hl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2022-11-20 02:11:06 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2022-11-20 02:11:11 +0000 UTC}]. Failure [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:84 [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] dump namespaces | framework.go:196 �[1mSTEP:�[0m dump namespace information after failure �[38;5;243m11/20/22 02:55:23.895�[0m �[1mSTEP:�[0m Collecting events from namespace "sched-preemption-6380". �[38;5;243m11/20/22 02:55:23.895�[0m �[1mSTEP:�[0m Found 8 events. �[38;5;243m11/20/22 02:55:23.923�[0m Nov 20 02:55:23.924: INFO: At 2022-11-20 02:47:16 +0000 UTC - event for victim-pod: {default-scheduler } Scheduled: Successfully assigned sched-preemption-6380/victim-pod to capz-conf-clckq Nov 20 02:55:23.924: INFO: At 2022-11-20 02:47:19 +0000 UTC - event for victim-pod: {kubelet capz-conf-clckq} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Nov 20 02:55:23.924: INFO: At 2022-11-20 02:47:19 +0000 UTC - event for victim-pod: {kubelet capz-conf-clckq} Created: Created container victim-pod Nov 20 02:55:23.924: INFO: At 2022-11-20 02:47:21 +0000 UTC - event for victim-pod: {kubelet capz-conf-clckq} Started: Started container victim-pod Nov 20 02:55:23.924: INFO: At 2022-11-20 02:47:22 +0000 UTC - event for preemptor-pod: {default-scheduler } Scheduled: Successfully assigned sched-preemption-6380/preemptor-pod to capz-conf-clckq Nov 20 02:55:23.924: INFO: At 2022-11-20 02:47:26 +0000 UTC - event for preemptor-pod: {kubelet capz-conf-clckq} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Nov 20 02:55:23.924: INFO: At 2022-11-20 02:47:26 +0000 UTC - event for preemptor-pod: {kubelet capz-conf-clckq} Created: Created container preemptor-pod Nov 20 02:55:23.924: INFO: At 2022-11-20 02:47:27 +0000 UTC - event for preemptor-pod: {kubelet capz-conf-clckq} Started: Started container preemptor-pod Nov 20 02:55:23.952: INFO: POD NODE PHASE GRACE CONDITIONS Nov 20 02:55:23.952: INFO: preemptor-pod capz-conf-clckq Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-20 02:47:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-20 02:47:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-20 02:47:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-20 02:47:22 +0000 UTC }] Nov 20 02:55:23.952: INFO: victim-pod capz-conf-clckq Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-20 02:47:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-20 02:47:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-20 02:47:21 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2022-11-20 02:47:16 +0000 UTC }] Nov 20 02:55:23.952: INFO: Nov 20 02:55:24.070: INFO: Logging node info for node capz-conf-clckq Nov 20 02:55:24.099: INFO: Node Info: &Node{ObjectMeta:{capz-conf-clckq 7b0dbe9f-6e88-4c01-99b1-2465612a0daf 17652 0 2022-11-20 01:10:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-clckq kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-fmlvhp cluster.x-k8s.io/cluster-namespace:capz-conf-fmlvhp cluster.x-k8s.io/machine:capz-conf-fmlvhp-md-win-59d5d57569-95kvw cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-fmlvhp-md-win-59d5d57569 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.216.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:e4:64:fe volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet.exe Update v1 2022-11-20 01:10:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-20 01:10:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2022-11-20 01:10:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {manager Update v1 2022-11-20 01:11:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-20 01:11:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {e2e.test Update v1 2022-11-20 02:47:16 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}} status} {kubelet.exe Update v1 2022-11-20 02:52:29 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakecpu":{},"f:scheduling.k8s.io/foo":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-fmlvhp/providers/Microsoft.Compute/virtualMachines/capz-conf-clckq,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},example.com/fakecpu: {{1 3} {<nil>} 1k DecimalSI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},example.com/fakecpu: {{1 3} {<nil>} 1k DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},scheduling.k8s.io/foo: {{1 0} {<nil>} 1 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-20 02:52:29 +0000 UTC,LastTransitionTime:2022-11-20 01:10:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-20 02:52:29 +0000 UTC,LastTransitionTime:2022-11-20 01:10:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-20 02:52:29 +0000 UTC,LastTransitionTime:2022-11-20 01:10:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-20 02:52:29 +0000 UTC,LastTransitionTime:2022-11-20 01:10:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-clckq,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-clckq,SystemUUID:14041CED-10D5-4B34-9D4C-344B56A7FFCF,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,KubeProxyVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:269513752,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:207280609,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:203697965,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:168375296,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:167222041,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.27.0-alpha.0.32_57eb5d631ccd61-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 20 02:55:24.100: INFO: Logging kubelet events for node capz-conf-clckq Nov 20 02:55:24.129: INFO: Logging pods the kubelet thinks is on node capz-conf-clckq Nov 20 02:55:24.165: INFO: calico-node-windows-v42gv started at 2022-11-20 01:10:05 +0000 UTC (1+2 container statuses recorded) Nov 20 02:55:24.165: INFO: Init container install-cni ready: true, restart count 0 Nov 20 02:55:24.165: INFO: Container calico-node-felix ready: true, restart count 1 Nov 20 02:55:24.165: INFO: Container calico-node-startup ready: true, restart count 0 Nov 20 02:55:24.165: INFO: csi-proxy-6bzv9 started at 2022-11-20 01:10:37 +0000 UTC (0+1 container statuses recorded) Nov 20 02:55:24.165: INFO: Container csi-proxy ready: true, restart count 0 Nov 20 02:55:24.165: INFO: kube-proxy-windows-g2j89 started at 2022-11-20 01:10:05 +0000 UTC (0+1 container statuses recorded) Nov 20 02:55:24.165: INFO: Container kube-proxy ready: true, restart count 0 Nov 20 02:55:24.165: INFO: victim-pod started at 2022-11-20 02:47:16 +0000 UTC (0+1 container statuses recorded) Nov 20 02:55:24.165: INFO: Container victim-pod ready: true, restart count 0 Nov 20 02:55:24.165: INFO: containerd-logger-g67b6 started at 2022-11-20 01:10:05 +0000 UTC (0+1 container statuses recorded) Nov 20 02:55:24.165: INFO: Container containerd-logger ready: true, restart count 0 Nov 20 02:55:24.165: INFO: preemptor-pod started at 2022-11-20 02:47:22 +0000 UTC (0+1 container statuses recorded) Nov 20 02:55:24.165: INFO: Container preemptor-pod ready: true, restart count 0 Nov 20 02:55:24.350: INFO: Latency metrics for node capz-conf-clckq Nov 20 02:55:24.350: INFO: Logging node info for node capz-conf-fmlvhp-control-plane-b26jb Nov 20 02:55:24.378: INFO: Node Info: &Node{ObjectMeta:{capz-conf-fmlvhp-control-plane-b26jb c66af1fa-58b8-4558-8db4-48fd044f3e9e 17632 0 2022-11-20 01:06:59 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 
beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:eastus-2 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-fmlvhp-control-plane-b26jb kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:eastus-2] map[cluster.x-k8s.io/cluster-name:capz-conf-fmlvhp cluster.x-k8s.io/cluster-namespace:capz-conf-fmlvhp cluster.x-k8s.io/machine:capz-conf-fmlvhp-control-plane-vnvbt cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-conf-fmlvhp-control-plane kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.89.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-20 01:06:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-20 01:07:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {manager Update v1 2022-11-20 01:07:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kube-controller-manager Update v1 2022-11-20 01:07:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} } {Go-http-client Update v1 2022-11-20 01:07:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-20 02:55:12 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-fmlvhp/providers/Microsoft.Compute/virtualMachines/capz-conf-fmlvhp-control-plane-b26jb,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344723456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239865856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-20 01:07:40 +0000 UTC,LastTransitionTime:2022-11-20 01:07:40 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-20 02:55:12 +0000 UTC,LastTransitionTime:2022-11-20 01:06:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-20 02:55:12 +0000 UTC,LastTransitionTime:2022-11-20 01:06:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-20 02:55:12 +0000 UTC,LastTransitionTime:2022-11-20 01:06:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-20 02:55:12 +0000 UTC,LastTransitionTime:2022-11-20 01:07:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-fmlvhp-control-plane-b26jb,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:dd4b205b73c3437f8d4072eaa7e987bd,SystemUUID:6f6cc87d-f984-fb40-b2c2-f407cd2b06d2,BootID:db6fac5b-4561-4119-aa40-0dfa37daf137,KernelVersion:5.4.0-1091-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,KubeProxyVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-apiserver:v1.27.0-alpha.0.32_57eb5d631ccd61],SizeBytes:135160275,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.0.32_57eb5d631ccd61],SizeBytes:124990265,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:b83c1d70989e1fe87583607bf5aee1ee34e52773d4755b95f5cf5a451962f3a4 registry.k8s.io/etcd:3.5.5-0],SizeBytes:102417044,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-proxy:v1.27.0-alpha.0.32_57eb5d631ccd61],SizeBytes:67201736,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.32_57eb5d631ccd61 registry.k8s.io/kube-scheduler:v1.27.0-alpha.0.32_57eb5d631ccd61],SizeBytes:57660216,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3 registry.k8s.io/kube-apiserver:v1.25.3],SizeBytes:34238163,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91 registry.k8s.io/kube-controller-manager:v1.25.3],SizeBytes:31261869,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f registry.k8s.io/kube-proxy:v1.25.3],SizeBytes:20265805,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e 
registry.k8s.io/kube-scheduler:v1.25.3],SizeBytes:15798744,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 20 02:55:24.379: INFO: Logging kubelet events for node capz-conf-fmlvhp-control-plane-b26jb Nov 20 02:55:24.408: INFO: Logging pods the kubelet thinks is on node capz-conf-fmlvhp-control-plane-b26jb Nov 20 02:55:24.453: INFO: etcd-capz-conf-fmlvhp-control-plane-b26jb started at 2022-11-20 01:07:02 +0000 UTC (0+1 container statuses recorded) Nov 20 02:55:24.453: INFO: Container etcd ready: true, restart count 0 Nov 20 02:55:24.453: INFO: kube-apiserver-capz-conf-fmlvhp-control-plane-b26jb started at 2022-11-20 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 20 02:55:24.453: INFO: Container kube-apiserver ready: true, restart count 0 Nov 20 02:55:24.453: INFO: kube-controller-manager-capz-conf-fmlvhp-control-plane-b26jb started at 2022-11-20 01:07:01 +0000 UTC (0+1 container statuses recorded) Nov 20 02:55:24.453: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 20 02:55:24.453: INFO: coredns-787d4945fb-jnvnw started at 2022-11-20 01:07:32 +0000 UTC (0+1 container statuses recorded) Nov 20 02:55:24.453: INFO: Container coredns ready: true, restart count 0 Nov 20 02:55:24.453: INFO: kube-scheduler-capz-conf-fmlvhp-control-plane-b26jb started at 2022-11-20 01:07:02 +0000 UTC (0+1 container statuses recorded) Nov 20 02:55:24.453: INFO: Container kube-scheduler ready: true, restart count 0 Nov 20 02:55:24.453: INFO: kube-proxy-grwp5 started at 2022-11-20 01:07:02 +0000 UTC (0+1 container statuses recorded) Nov 20 02:55:24.453: INFO: Container kube-proxy ready: true, restart count 0 Nov 20 02:55:24.453: INFO: calico-node-2d9f6 started at 2022-11-20 01:07:18 +0000 UTC (2+1 container statuses recorded) Nov 20 02:55:24.453: INFO: Init container upgrade-ipam ready: true, restart count 0 Nov 20 02:55:24.453: INFO: Init container install-cni ready: true, restart count 0 Nov 20 02:55:24.453: INFO: Container calico-node ready: true, restart count 0 Nov 20 02:55:24.453: INFO: coredns-787d4945fb-w8th2 started at 2022-11-20 01:07:32 +0000 UTC (0+1 container statuses recorded) Nov 20 02:55:24.453: INFO: Container coredns ready: true, restart count 0 Nov 20 02:55:24.453: INFO: metrics-server-c9574f845-dwd4x started at 2022-11-20 01:07:32 +0000 UTC (0+1 container statuses recorded) Nov 20 02:55:24.453: INFO: Container metrics-server ready: true, restart count 0 Nov 20 02:55:24.453: INFO: calico-kube-controllers-657b584867-kprw6 started at 2022-11-20 01:07:32 +0000 UTC (0+1 container statuses recorded) Nov 20 02:55:24.453: INFO: Container calico-kube-controllers ready: true, restart count 0 Nov 20 02:55:24.590: INFO: Latency metrics for node capz-conf-fmlvhp-control-plane-b26jb Nov 20 02:55:24.590: INFO: Logging node info for node capz-conf-j95hl Nov 20 02:55:24.619: INFO: Node Info: 
&Node{ObjectMeta:{capz-conf-j95hl 9874c50e-dbb9-48a0-a3e6-e1158b58eb2b 12781 0 2022-11-20 01:09:13 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:eastus failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-conf-j95hl kubernetes.io/os:windows node.kubernetes.io/instance-type:Standard_D4s_v3 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:eastus topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-conf-fmlvhp cluster.x-k8s.io/cluster-namespace:capz-conf-fmlvhp cluster.x-k8s.io/machine:capz-conf-fmlvhp-md-win-59d5d57569-9kkk6 cluster.x-k8s.io/owner-kind:MachineSet cluster.x-k8s.io/owner-name:capz-conf-fmlvhp-md-win-59d5d57569 kubeadm.alpha.kubernetes.io/cri-socket:npipe:////./pipe/containerd-containerd node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.119.65 projectcalico.org/VXLANTunnelMACAddr:00:15:5d:a2:fd:fd volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet.exe Update v1 2022-11-20 01:09:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-11-20 01:09:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {manager Update v1 2022-11-20 01:10:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {Go-http-client Update v1 2022-11-20 01:10:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{},"f:projectcalico.org/VXLANTunnelMACAddr":{}}}} status} {kubelet.exe Update v1 2022-11-20 02:10:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{}}},"f:images":{}}} status} {kube-controller-manager Update v1 2022-11-20 02:11:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2022-11-20 02:11:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}} 
}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-conf-fmlvhp/providers/Microsoft.Compute/virtualMachines/capz-conf-j95hl,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2022-11-20 02:11:06 +0000 UTC,},Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:2022-11-20 02:11:11 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{136912564224 0} {<nil>} 133703676Ki BinarySI},memory: {{17179398144 0} {<nil>} 16776756Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{123221307598 0} {<nil>} 123221307598 DecimalSI},memory: {{17074540544 0} {<nil>} 16674356Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2022-11-20 02:10:09 +0000 UTC,LastTransitionTime:2022-11-20 02:11:06 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2022-11-20 02:10:09 +0000 UTC,LastTransitionTime:2022-11-20 02:11:06 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2022-11-20 02:10:09 +0000 UTC,LastTransitionTime:2022-11-20 02:11:06 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:2022-11-20 02:10:09 +0000 UTC,LastTransitionTime:2022-11-20 02:11:06 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-conf-j95hl,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:capz-conf-j95hl,SystemUUID:1EDFC854-811A-4EAC-947F-A7208BD291AA,BootID:9,KernelVersion:10.0.17763.3406,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,KubeProxyVersion:v1.27.0-alpha.0.32+57eb5d631ccd61,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:269513752,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:207280609,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:203697965,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:168375296,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:167222041,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger@sha256:63bf2aa9db909d0d90fb5205abf7fb2a6d9a494b89cbd2508a42457dfc875505 ghcr.io/kubernetes-sigs/sig-windows/eventflow-logger:v0.1.0],SizeBytes:133732668,},ContainerImage{Names:[docker.io/sigwindowstools/kube-proxy:v1.23.1-calico-hostprocess docker.io/sigwindowstools/kube-proxy:v1.27.0-alpha.0.32_57eb5d631ccd61-calico-hostprocess],SizeBytes:116182072,},ContainerImage{Names:[ghcr.io/kubernetes-sigs/sig-windows/csi-proxy@sha256:96b4144986319a747ba599892454be2737aae6005d96b8e13ed481321ac3afba ghcr.io/kubernetes-sigs/sig-windows/csi-proxy:v1.0.2],SizeBytes:109639330,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/resource-consumer@sha256:ba5e047a337e5d0709bc57df45b95b2c7f6f2794b290e4e24f7fc8980d60b25a registry.k8s.io/e2e-test-images/resource-consumer:1.13],SizeBytes:106357351,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:104158827,},ContainerImage{Names:[docker.io/sigwindowstools/calico-install@sha256:1dac2d6534d9017f8967cc6238d6b448bdc1c978b5e8fea91bf39dc59d29881f docker.io/sigwindowstools/calico-install:v3.23.0-hostprocess],SizeBytes:47258351,},ContainerImage{Names:[docker.io/sigwindowstools/calico-node@sha256:6ea7a987c109fdc059a36bf4abc5267c6f3de99d02ef6e84f0826da2aa435ea5 docker.io/sigwindowstools/calico-node:v3.23.0-hostprocess],SizeBytes:27005594,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 20 02:55:24.619: INFO: Logging kubelet events for node capz-conf-j95hl
Nov 20 02:55:24.647: INFO: Logging pods the kubelet thinks is on node capz-conf-j95hl
Nov 20 02:55:54.681: INFO: Unable to retrieve kubelet pods for node capz-conf-j95hl: error trying to reach service: dial tcp 10.1.0.4:10250: i/o timeout
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "sched-preemption-6380" for this suite. 11/20/22 02:55:54.681
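The dump above is the standard core/v1 Node object: the kubelet stopped posting status, so every condition was set to Unknown and node.kubernetes.io/unreachable taints were added, after which the kubelet endpoint on 10.1.0.4:10250 could not be reached. As a minimal sketch (not part of the test log or the e2e suite), the same conditions and taints could be read with client-go as below; the kubeconfig path is an assumption for illustration, and the node name is taken from the dump.

// Sketch only: list the Node conditions and taints shown in the dump above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumed value for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "capz-conf-j95hl", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// When the kubelet stops posting status, conditions flip to Unknown and
	// node.kubernetes.io/unreachable taints appear, as in the dump above.
	for _, c := range node.Status.Conditions {
		fmt.Printf("condition %s=%s reason=%s\n", c.Type, c.Status, c.Reason)
	}
	for _, t := range node.Spec.Taints {
		fmt.Printf("taint %s effect=%s\n", t.Key, t.Effect)
	}
}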
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should support cascading deletion of custom resources
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply an update to a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-apps] ControllerRevision [Serial] should manage the lifecycle of a ControllerRevision [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light [Slow] Should scale from 2 pods to 1 pod
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource) Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods on a busy application with an idle sidecar container
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod and verify decision stability
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with long upscale stabilization window should scale up only after the stabilization period
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by percentage should scale up no more than given percentage of current Pods per minute
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
Kubernetes e2e suite [It] [sig-windows] [Feature:Windows] Kubelet-Stats [Serial] Kubelet stats collection for Windows nodes when running 10 pods should return within 10 seconds
Kubernetes e2e suite [It] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] Allocatable node memory should be equal to a calculated allocatable memory value
Kubernetes e2e suite [ReportAfterSuite] Kubernetes e2e suite report
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
capz-e2e Conformance Tests conformance-tests
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (fairness)
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (priority)
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [Flaky] kubectl explain works for CR with the same resource name as built-in object.
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST NOT fail validation for create of a custom resource that satisfies the x-kubernetes-validations rules
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains a x-kubernetes-validations rule that refers to a property that do not exist
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains an x-kubernetes-validations rule that contains a syntax error
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains an x-kubernetes-validations rule that exceeds the estimated cost limit
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource that exceeds the runtime cost limit for x-kubernetes-validations rule execution
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail update of a custom resource that does not satisfy a x-kubernetes-validations transition rule
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail validation for create of a custom resource that does not satisfy the x-kubernetes-validations rules
Kubernetes e2e suite [It] [sig-api-machinery] Discovery Custom resource should have storage version hash
Kubernetes e2e suite [It] [sig-api-machinery] Discovery should accurately determine present and missing resources
Kubernetes e2e suite [It] [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Etcd failure [Disruptive] should recover from SIGKILL
Kubernetes e2e suite [It] [sig-api-machinery] Etcd failure [Disruptive] should recover from network partition with master
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should support orphan deletion of custom resources
Kubernetes e2e suite [It] [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
Kubernetes e2e suite [It] [sig-api-machinery] Generated clientset should create v1 cronJobs, delete cronJobs, watch cronJobs
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should always delete fast (ALL of 100 namespaces in 150 seconds) [Feature:ComprehensiveNamespaceDraining]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply a finalizer to a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply changes to a namespace status [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's multiple priority class scope (quota set to pod count: 2) against 2 pods with same priority classes.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (cpu, memory quota set) against a pod with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with different priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpExists).
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpNotIn).
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with best effort scope using scope-selectors.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with terminating scopes through scope selectors.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should apply changes to a resourcequota status [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should manage the lifecycle of a ResourceQuota [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout the request should be served with a default timeout if the specified timeout in the request URL exceeds maximum allowed
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should create an applied object if it does not already exist
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should give up ownership of a field if forced applied by a controller
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should work for CRDs
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should work for subresources
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for API chunking should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow]
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return pod details
Kubernetes e2e suite [It] [sig-api-machinery] StorageVersion resources [Feature:StorageVersionAPI] storage version with non-existing id should be GC'ed
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json,application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf,application/json"
Kubernetes e2e suite [It] [sig-api-machinery] health handlers should contain necessary checks
Kubernetes e2e suite [It] [sig-api-machinery] kube-apiserver identity [Feature:APIServerIdentity] kube-apiserver identity should persist after restart [Disruptive]
Kubernetes e2e suite [It] [sig-api-machinery] server version should find the server version [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should be able to schedule after more than 100 missed schedule
Kubernetes e2e suite [It] [sig-apps] CronJob should delete failed finished jobs with limit of one job
Kubernetes e2e suite [It] [sig-apps] CronJob should delete successful finished jobs with limit of one successful job
Kubernetes e2e suite [It] [sig-apps] CronJob should not emit unexpected warnings
Kubernetes e2e suite [It] [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should remove from active list jobs that have been deleted
Kubernetes e2e suite [It] [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should support CronJob API operations [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should support timezone
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop complex daemon with node affinity
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should surge pods onto nodes when spec was updated and update strategy is RollingUpdate
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Kube-proxy should recover after being killed accidentally
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Kubelet should not restart containers across restart
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart
Kubernetes e2e suite [It] [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should delete old replica sets [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should support proportional scaling [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should support rollover [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment iterative rollouts should eventually progress
Kubernetes e2e suite [It] [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout
Kubernetes e2e suite [It] [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef
Kubernetes e2e suite [It] [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: enough pods, absolute => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer => should not allow an eviction [Serial]
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: no PDB => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: too few pods, absolute => should not allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction [Serial]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should observe that the PodDisruptionBudget status is not updated for unmanaged pods
Kubernetes e2e suite [It] [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job Using a pod failure policy to not count some failures towards the backoffLimit Ignore DisruptionTarget condition
Kubernetes e2e suite [It] [sig-apps] Job Using a pod failure policy to not count some failures towards the backoffLimit Ignore exit code 137
Kubernetes e2e suite [It] [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should allow to use the pod failure policy on exit code to fail the job early
Kubernetes e2e suite [It] [sig-apps] Job should allow to use the pod failure policy to not count the failure towards the backoffLimit
Kubernetes e2e suite [It] [sig-apps] Job should apply changes to a job status [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should delete a job [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should delete pods when suspended
Kubernetes e2e suite [It] [sig-apps] Job should fail to exceed backoffLimit
Kubernetes e2e suite [It] [sig-apps] Job should fail when exceeds active deadline
Kubernetes e2e suite [It] [sig-apps] Job should manage the lifecycle of a job [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should not create pods when created in suspend state
Kubernetes e2e suite [It] [sig-apps] Job should remove pods when job is deleted
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks succeed
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion with CPU requests [Serial]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet Replace and Patch tests [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should serve a basic image on each replica with a private image
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should get and update a ReplicationController scale [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should release no longer matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should serve a basic image on each replica with a private image
Kubernetes e2e suite [It] [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet AvailableReplicas should get updated accordingly when MinReadySeconds is enabled
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working mysql cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working redis cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working zookeeper cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet MinReadySeconds should be honored when enabled
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs after adopting pod (WhenDeleted)
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs after adopting pod (WhenScaled) [Feature:StatefulSetAutoDeletePVC]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs with a OnScaledown policy
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs with a WhenDeleted policy
Kubernetes e2e suite [It] [sig-apps] TTLAfterFinished job should be deleted once it finishes after TTL seconds
Kubernetes e2e suite [It] [sig-apps] stateful Upgrade [Feature:StatefulUpgrade] stateful upgrade should maintain a functioning cluster
Kubernetes e2e suite [It] [sig-architecture] Conformance Tests should have at least two untainted nodes [Conformance]
Kubernetes e2e suite [It] [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]
Kubernetes e2e suite [It] [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR
Kubernetes e2e suite [It] [sig-auth] SelfSubjectReview [Feature:APISelfSubjectReview] should support SelfSubjectReview API operations
Kubernetes e2e suite [It] [sig-auth] ServiceAccount admission controller migration [Feature:BoundServiceAccountTokenVolume] master upgrade should maintain a functioning cluster
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts no secret-based service account token should be auto-generated
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should mount projected service account token [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should update a ServiceAccount [Conformance]
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthenticator] The kubelet can delegate ServiceAccount tokens to the API server
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthenticator] The kubelet's main port 10250 should reject requests with no credentials
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to create another node
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to delete another node
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent configmap should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent secret should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a secret for a workload the node has access to should succeed
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting an existing configmap should exit with the Forbidden error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting an existing secret should exit with the Forbidden error
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down empty nodes [Feature:ClusterAutoscalerScalability3]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up at all [Feature:ClusterAutoscalerScalability1]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up twice [Feature:ClusterAutoscalerScalability2]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group up from 0[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should not scale GPU pool up if pod does not require GPUs [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale down GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 0 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Shouldn't perform scale up operation and should list unhealthy status if most of the cluster is broken[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining multiple pods one by one as dictated by pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down when rescheduling a pod is required and pdb allows for it[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and one node is broken [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting EmptyDir volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to pod anti-affinity [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale up when non expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale down when non expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is preempted [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't trigger additional scale-ups during processing scale-up [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed
Kubernetes e2e suite [It] [sig-autoscaling] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:ClusterSizeAutoscalingScaleUp] [Slow] Autoscaling Autoscaling a service from 1 pod and 3 nodes to 8 pods and >=4 nodes takes less than 15 minutes
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) CustomResourceDefinition Should scale with a CRD targetRef
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 1 pod to 2 pods
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should not scale up on a busy sidecar with an idle application
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with autoscaling disabled shouldn't scale up
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with both scale up and down controls configured should keep recommendation within the range over two stabilization windows
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by number of Pods rate should scale up no more than given number of Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by percentage should scale down no more than given percentage of current Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with short downscale stabilization window should scale down soon after the stabilization period
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Object from Stackdriver should scale down
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Object from Stackdriver should scale down to 0
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Pod from Stackdriver should scale down
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Pod from Stackdriver should scale down with Prometheus
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Pod from Stackdriver should scale up with two metrics
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with External Metric from Stackdriver should scale down with target average value
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with External Metric from Stackdriver should scale down with target value
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with External Metric from Stackdriver should scale up with two metrics
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should not scale down when one metric is missing (Container Resource and External Metrics)
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should not scale down when one metric is missing (Pod and Object Metrics)
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should scale up when one metric is missing (Pod and External metrics)
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should scale up when one metric is missing (Resource and Object metrics)
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl apply apply set/view last-applied
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl cluster-info dump should check if cluster-info dump succeeds
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl events should show event when pod is created
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl taint [Serial] should remove all the taints with the same key off a node
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl taint [Serial] should update the taint on a node
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should create/apply a CR with unknown fields for CRD with no validation schema
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should create/apply a valid CR for CRD with validation schema
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should create/apply an invalid/valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should detect unknown metadata fields in both the root and embedded object of a CR
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should detect unknown metadata fields of a typed object
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should contain last line of the log
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should handle in-cluster config
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command with --leave-stdin-open
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command without --restart=Never
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command without --restart=Never, but with --rm
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a failing command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a successful command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes running a failing command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes running a successful command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec using resource/name
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support inline execution and attach
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support port-forward
Kubernetes e2e suite [It] [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client kubectl wait should ignore not found error with --for=delete
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Addon update should propagate add-on file changes [Slow]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Downgrade [Feature:Downgrade] cluster downgrade should maintain a functioning cluster [Feature:ClusterDowngrade]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] GKE node pools [Feature:GKENodePool] should create a cluster with multiple node pools [Feature:GKENodePool]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas different zones [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas multizone workers [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas same zone [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to add nodes
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to delete nodes
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to cadvisor port 4194 using proxy subresource
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to the readonly kubelet port 10255 using proxy subresource
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not have port 10255 open on its all public IP addresses
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not have port 4194 open on its all public IP addresses
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering unclean reboot and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by triggering kernel panic and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Recreate [Feature:Recreate] recreate nodes and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] master upgrade should maintain a functioning cluster [Feature:MasterUpgrade]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] [Disruptive]NodeLease NodeLease deletion node lease should be deleted when corresponding node is deleted
Kubernetes e2e suite [It] [sig-cloud-provider] [Feature:CloudProvider][Disruptive] Nodes should be deleted on API server if it doesn't exist in the cloud provider
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the signed bootstrap tokens from clusterInfo ConfigMap when bootstrap token is deleted
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the token secret when the secret expired
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should not delete the token secret when the secret is not expired
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should resign the bootstrap tokens when the clusterInfo ConfigMap updated [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should sign the new added bootstrap tokens
Kubernetes e2e suite [It] [sig-instrumentation] Events API should delete a collection of events [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events should delete a collection of events [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events should manage the lifecycle of an event [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from API server.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should have accelerator metrics [Feature:StackdriverAcceleratorMonitoring]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should have cluster metrics [Feature:StackdriverMonitoring]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for external metrics [Feature:StackdriverExternalMetrics]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for new resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for old resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Stackdriver Metadata Agent [Feature:StackdriverMetadataAgent]
Kubernetes e2e suite [It] [sig-network] CVE-2021-29923 IPv4 Service Type ClusterIP with leading zeros should work interpreted as decimal
Kubernetes e2e suite [It] [sig-network] ClusterDns [Feature:Example] should create pod that uses dns
Kubernetes e2e suite [It] [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready
Kubernetes e2e suite [It] [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
Kubernetes e2e suite [It] [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service
Kubernetes e2e suite [It] [sig-network] Conntrack should drop INVALID conntrack entries [Privileged]
Kubernetes e2e suite [It] [sig-network] DNS HostNetwork should resolve DNS of partial qualified names for services on hostNetwork pods with dnsPolicy: ClusterFirstWithHostNet [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] DNS configMap nameserver Change stubDomain should be able to change stubDomain configuration [Slow][Serial]
Kubernetes e2e suite [It] [sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [It] [sig-network] DNS configMap nameserver Forward external name lookup should forward externalname lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [It] [sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for ExternalName services [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for pods for Hostname [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for services [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for the cluster [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for the cluster [Provider:GCE]
Kubernetes e2e suite [It] [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] DNS should support configurable pod DNS nameservers [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should support configurable pod resolv.conf
Kubernetes e2e suite [It] [sig-network] DNS should work with the pod containing more than 6 DNS search paths and longer than 256 search list characters
Kubernetes e2e suite [It] [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]
Kubernetes e2e suite [It] [sig-network] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service
Kubernetes e2e suite [It] [sig-network] Firewall rule control plane should not expose well-known ports
Kubernetes e2e suite [It] [sig-network] Firewall rule should have correct firewall rules for e2e cluster
Kubernetes e2e suite [It] [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Ingress API should support creating Ingress API operations [Conformance]
Kubernetes e2e suite [It] [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should allow IngressClass to have Namespace-scoped parameters [Serial]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should choose the one with the later CreationTimestamp, if equal the one with the lower name when two ingressClasses are marked as default[Serial]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should not set default value if no default IngressClass [Serial]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should set default value on new IngressClass [Serial]
Kubernetes e2e suite [It] [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work for type=NodePort
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work from pods
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to create LoadBalancer Service without NodePort and change it [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to preserve UDP traffic when server pod cycles for a LoadBalancer service on different nodes
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to preserve UDP traffic when server pod cycles for a LoadBalancer service on the same nodes
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP off [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP on [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should handle load balancer cleanup finalizer for service [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP off [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP on [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should only allow access from service loadbalancer source ranges [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should reconcile LB health check interval [Slow][Serial][Disruptive]
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] should conform to Ingress spec
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] rolling update backend pods should not cause service disruption
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should be able to create a ClusterIP service
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should be able to switch between IG and NEG modes
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should conform to Ingress spec
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should create NEGs for all ports with the Ingress annotation, and NEGs for the standalone annotation otherwise
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should sync endpoints for both Ingress-referenced NEG and standalone NEG
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should sync endpoints to NEG
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 Scalability GCE [Slow] [Serial] [Feature:IngressScale] Creating and updating ingresses should happen promptly with small/medium/large amount of ingresses
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy API operations
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy API with endport field
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy with Status subresource [Feature:NetworkPolicyStatus]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny egress from all pods in a namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny egress from pods based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny ingress from pods on other namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce ingress policy allowing any port traffic to a server on a specific protocol [Feature:NetworkPolicy] [Feature:UDP]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Multiple PodSelectors and NamespaceSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on any PodSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic for a target [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic from pods in all namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic based on NamespaceSelector with MatchLabels using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should not allow access by TCP when a policy specifies only UDP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should not mistakenly treat 'protocol: SCTP' as 'protocol: TCP', even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should properly isolate pods that are selected by a policy allowing SCTP, even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support denying of egress traffic on the client side (even if the server explicitly allows this traffic) [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should work with Ingress, Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should support a 'default-deny' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should work with Ingress,Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should be able to handle large requests: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for multiple endpoint-Services with same selector
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for service endpoints using hostNetwork
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should support basic nodePort: udp functionality
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update endpoints: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update endpoints: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update nodePort: http [Slow]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]
Kubernetes e2e suite [It] [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2
Kubernetes e2e suite [It] [sig-network] Networking should check kube-proxy urls
Kubernetes e2e suite [It] [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]
Kubernetes e2e suite [It] [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv6][Experimental][LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services
Kubernetes e2e suite [It] [sig-network] Networking should provider Internet connection for containers using DNS [Feature:Networking-DNS]
Kubernetes e2e suite [It] [sig-network] Networking should recreate its iptables rules if they are deleted [Disruptive]
Kubernetes e2e suite [It] [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT
Kubernetes e2e suite [It] [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance]
Kubernetes e2e suite [It] [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy logs on node using proxy subresource
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
Kubernetes e2e suite [It] [sig-network] SCTP [LinuxOnly] should allow creating a basic SCTP service with pod and endpoints
Kubernetes e2e suite [It] [sig-network] SCTP [LinuxOnly] should create a ClusterIP Service with SCTP ports
Kubernetes e2e suite [It] [sig-network] SCTP [LinuxOnly] should create a Pod with SCTP HostPort
Kubernetes e2e suite [It] [sig-network] Service endpoints latency should not be very high [Conformance]
Kubernetes e2e suite [It] [sig-network] Services GCE [Slow] should be able to create and tear down a standard-tier load balancer [Slow]
Kubernetes e2e suite [It] [sig-network] Services should allow pods to hairpin back to themselves through services
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is true
Kubernetes e2e suite [It] [sig-network] Services should be able to create a functioning NodePort service [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to up and down services
Kubernetes e2e suite [It] [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols
Kubernetes e2e suite [It] [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
Kubernetes e2e suite [It] [sig-network] Services should be rejected for evicted pods (no endpoints exist)
Kubernetes e2e suite [It] [sig-network] Services should be rejected when no endpoints exist
Kubernetes e2e suite [It] [sig-network] Services should be updated after adding or deleting ports
Kubernetes e2e suite [It] [sig-network] Services should check NodePort out-of-range
Kubernetes e2e suite [It] [sig-network] Services should complete a service status lifecycle [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should create endpoints for unready pods
Kubernetes e2e suite [It] [sig-network] Services should delete a collection of services [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should fail health check node port if there are only terminating endpoints [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to local terminating endpoints when there are no ready endpoints with externalTrafficPolicy=Local [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to local terminating endpoints when there are no ready endpoints with internalTrafficPolicy=Local [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to terminating endpoints when there are no ready endpoints with externallTrafficPolicy=Cluster [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to terminating endpoints when there are no ready endpoints with internalTrafficPolicy=Cluster [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should find a service from listing all namespaces [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should implement service.kubernetes.io/headless
Kubernetes e2e suite [It] [sig-network] Services should implement service.kubernetes.io/service-proxy-name
Kubernetes e2e suite [It] [sig-network] Services should not be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is false
Kubernetes e2e suite [It] [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Services should prevent NodePort collisions
Kubernetes e2e suite [It] [sig-network] Services should provide secure master service [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should release NodePorts on delete
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod (hostNetwork: true) to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should serve a basic endpoint from pods [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should serve endpoints on same port and different protocol for internal traffic on Type LoadBalancer
Kubernetes e2e suite [It] [sig-network] Services should serve multiport endpoints from pods [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should test the lifecycle of an Endpoint [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should work after restarting apiserver [Disruptive]
Kubernetes e2e suite [It] [sig-network] Services should work after restarting kube-proxy [Disruptive]
Kubernetes e2e suite [It] [sig-network] Services should work after the service has been recreated
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for service endpoints using hostNetwork
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should update endpoints: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should update endpoints: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should be able to reach pod on ipv4 and ipv6 ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create a single stack service with cluster ip from primary service range
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create pod, add ipv6 and ipv4 ip to pod ips
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv4 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv4,v6 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv6 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv6,v4 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should have ipv4 and ipv6 internal node ip
Kubernetes e2e suite [It] [sig-network] [Feature:PerformanceDNS][Serial] Should answer DNS query for maximum number of services per cluster
Kubernetes e2e suite [It] [sig-network] [Feature:Topology Hints] should distribute endpoints evenly
Kubernetes e2e suite [It] [sig-network] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Downgrade kube-proxy from a DaemonSet to static pods should maintain a functioning cluster [Feature:KubeProxyDaemonSetDowngrade]
Kubernetes e2e suite [It] [sig-network] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Upgrade kube-proxy from static pods to a DaemonSet should maintain a functioning cluster [Feature:KubeProxyDaemonSetUpgrade]
Kubernetes e2e suite [It] [sig-node] AppArmor load AppArmor profiles can disable an AppArmor profile, using unconfined
Kubernetes e2e suite [It] [sig-node] AppArmor load AppArmor profiles should enforce an AppArmor profile
Kubernetes e2e suite [It] [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should update ConfigMap successfully
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart https hook properly [MinimumKubeletVersion:1.23] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop https hook properly [MinimumKubeletVersion:1.23] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports external claim referenced by multiple containers of multiple pods
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports external claim referenced by multiple pods
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports init containers
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports inline claim referenced by multiple containers
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports simple pod referencing external resource claim
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports simple pod referencing inline resource claim
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports external claim referenced by multiple containers of multiple pods
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports external claim referenced by multiple pods
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports init containers
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports inline claim referenced by multiple containers
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports simple pod referencing external resource claim
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports simple pod referencing inline resource claim
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] driver supports claim and class parameters
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] kubelet must not run a pod if a claim is not reserved for it
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] kubelet must retry NodePrepareResource
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] kubelet must unprepare resources for force-deleted pod
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] kubelet registers plugin
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple drivers work
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple nodes reallocation works
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple nodes with network-attached resources schedules onto different nodes
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple nodes with node-local resources with delayed allocation uses all resources
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple nodes with node-local resources with immediate allocation uses all resources
Kubernetes e2e suite [It] [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide container's limits.hugepages-<pagesize> and requests.hugepages-<pagesize> as env vars
Kubernetes e2e suite [It] [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide default limits.hugepages-<pagesize> from node allocatable
Kubernetes e2e suite [It] [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Ephemeral Containers [NodeConformance] will start an ephemeral container in an existing pod [Conformance]
Kubernetes e2e suite [It] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet [Serial] [Slow] experimental resource usage tracking [Feature:ExperimentalResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [It] [sig-node] Kubelet [Serial] [Slow] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 0 pods per node
Kubernetes e2e suite [It] [sig-node] Kubelet [Serial] [Slow] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling an agnhost Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Lease lease API should be available [Conformance]
Kubernetes e2e suite [It] [sig-node] Mount propagation should propagate mounts within defined scopes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] only evicts pods without tolerations from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] doesn't evict pod with tolerations from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] evicts pods from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] pods evicted from tainted nodes have pod disruption condition
Kubernetes e2e suite [It] [sig-node] NodeLease NodeLease should have OwnerReferences set
Kubernetes e2e suite [It] [sig-node] NodeLease NodeLease the kubelet should create and update a lease in the kube-node-lease namespace
Kubernetes e2e suite [It] [sig-node] NodeLease NodeLease the kubelet should report node status infrequently
Kubernetes e2e suite [It] [sig-node] NodeProblemDetector should run without error
Kubernetes e2e suite [It] [sig-node] Pod garbage collector [Feature:PodGarbageCollector] [Slow] should handle the creation of 1000 pods
Kubernetes e2e suite [It] [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS
Kubernetes e2e suite [It] [sig-node] PodTemplates should delete a collection of pod templates [Conformance]
Kubernetes e2e suite [It] [sig-node] PodTemplates should replace a pod template [Conformance]
Kubernetes e2e suite [It] [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods Extended Delete Grace Period should be submitted and removed
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container Status should never report container start when an init container fails
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container Status should never report success for a pending container
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container lifecycle evicted pods should be terminal
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done
Kubernetes e2e suite [It] [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
Kubernetes e2e suite [It] [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should delete a collection of pods [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should get a host IP [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should patch a pod status [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should support pod readiness gates [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process
Kubernetes e2e suite [It] [sig-node] PreStop should call prestop when killing a pod [Conformance]
Kubernetes e2e suite [It] [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a GRPC liveness probe [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be ready immediately after startupProbe succeeds
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted by liveness probe after startup probe enables it
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted startup probe fails
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a GRPC liveness probe [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a local redirect http liveness probe
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should mark readiness on pods to false and disable liveness probes while pod is in progress of terminating
Kubernetes e2e suite [It] [sig-node] Probing container should mark readiness on pods to false while pod is in progress of terminating when a pod has a readiness probe
Kubernetes e2e suite [It] [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]
Kubernetes e2e suite [It] [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]
Kubernetes e2e suite [It] [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling with taints [Serial]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling without taints
Kubernetes e2e suite [It] [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] SSH should SSH to all nodes and run commands
Kubernetes e2e suite [It] [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should patch a secret [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers must create the user namespace if set to false [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers must not create the user namespace if set to true [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers should mount all volumes with proper permissions with hostUsers=false [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers should set FSGroup to user inside the container with hostUsers=false [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support volume SELinux relabeling [Flaky] [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support volume SELinux relabeling when using hostIPC [Flaky] [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support volume SELinux relabeling when using hostPID [Flaky] [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context when if the container's primary UID belongs to some groups in the image [LinuxOnly] should add pod.Spec.SecurityContext.SupplementalGroups to them [LinuxOnly] in resultant supplementary groups for the container processes
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls with slashes as separator [MinimumKubeletVersion:1.23]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace
Kubernetes e2e suite [It] [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted
Kubernetes e2e suite [It] [sig-node] [Feature:Example] Secret should create a pod that reads a secret
Kubernetes e2e suite [It] [sig-node] crictl should be able to run crictl on the node
Kubernetes e2e suite [It] [sig-node] gpu Upgrade [Feature:GPUUpgrade] cluster downgrade should be able to run gpu pod after downgrade [Feature:GPUClusterDowngrade]
Kubernetes e2e suite [It] [sig-node] gpu Upgrade [Feature:GPUUpgrade] cluster upgrade should be able to run gpu pod after upgrade [Feature:GPUClusterUpgrade]
Kubernetes e2e suite [It] [sig-node] gpu Upgrade [Feature:GPUUpgrade] master upgrade should NOT disrupt gpu pod [Feature:GPUMasterUpgrade]
Kubernetes e2e suite [It] [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.
Kubernetes e2e suite [It] [sig-node] kubelet host cleanup with volume mounts [HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (active) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [It] [sig-node] kubelet host cleanup with volume mounts [HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (sleeping) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [It] [sig-node] kubelet kubectl node-logs <node-name> [Feature:add node log viewer] should return the logs
Kubernetes e2e suite [It] [sig-node] kubelet kubectl node-logs <node-name> [Feature:add node log viewer] should return the logs for the provided path
Kubernetes e2e suite [It] [sig-node] kubelet kubectl node-logs <node-name> [Feature:add node log viewer] should return the logs for the requested service
Kubernetes e2e suite [It] [sig-scheduling] GPUDevicePluginAcrossRecreate [Feature:Recreate] run Nvidia GPU Device Plugin tests with a recreation
Kubernetes e2e suite [It] [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] LimitRange should list, patch and delete a LimitRange by collection [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] Multi-AZ Clusters should spread the pods of a replication controller across zones [Serial]
Kubernetes e2e suite [It] [sig-scheduling] Multi-AZ Clusters should spread the pods of a service across zones [Serial]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates Pods with non-empty schedulingGates are blocked on scheduling [Feature:PodSchedulingReadiness] [alpha]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed
Kubernetes e2e suite [It] [sig-scheduling] [Feature:GPUDevicePlugin] run Nvidia GPU Device Plugin tests
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] read-write-once-pod[Feature:ReadWriteOncePod] should block a second pod from using an in-use ReadWriteOncePod volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] read-write-once-pod[Feature:ReadWriteOncePod] should block a second pod from using an in-use ReadWriteOncePod volume on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volume-lifecycle-performance should provision volumes at scale within performance constraints [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] read-write-once-pod[Feature:ReadWriteOncePod] should block a second pod from using an in-use ReadWriteOncePod volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] read-write-once-pod[Feature:ReadWriteOncePod] should block a second pod from using an in-use ReadWriteOncePod volume on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volume-lifecycle-performance should provision volumes at scale within performance constraints [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI CSIDriver deployment after pod creation using non-attachable mock driver should bringup pod after deploying CSIDriver attach=false [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should call NodeUnstage after NodeStage ephemeral error
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should call NodeUnstage after NodeStage success
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should not call NodeUnstage after NodeStage final error
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should retry NodeStage after NodeStage ephemeral error
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should retry NodeStage after NodeStage final error
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] should call NodeStage after NodeUnstage success
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage transient error
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Snapshot Controller metrics [Feature:VolumeSnapshotDataSource] snapshot controller should emit dynamic CreateSnapshot, CreateSnapshotAndReady, and DeleteSnapshot metrics
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Snapshot Controller metrics [Feature:VolumeSnapshotDataSource] snapshot controller should emit pre-provisioned CreateSnapshot, CreateSnapshotAndReady, and DeleteSnapshot metrics
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Volume Snapshots [Feature:VolumeSnapshotDataSource] volumesnapshotcontent and pvc in Bound state with deletion timestamp set should not get deleted while snapshot finalizer exists
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Volume Snapshots secrets [Feature:VolumeSnapshotDataSource] volume snapshot create/delete with secrets
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for ephemermal volume and drivers with attachment
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI volume limit information using mock driver should report attach limit for generic ephemeral volume when persistent volume is attached [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI volume limit information using mock driver should report attach limit for persistent volume when generic ephemeral volume is attached [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI volume limit information using mock driver should report attach limit when limit is bigger than 0 [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity
Kubernetes e2e suite [It] [sig-storage] CSI mock volume Delegate FSGroup to CSI driver [LinuxOnly] should not pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP
Kubernetes e2e suite [It] [sig-storage] CSI mock volume Delegate FSGroup to CSI driver [LinuxOnly] should pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP
Kubernetes e2e suite [It] [sig-storage] CSI mock volume SELinuxMount [LinuxOnly][Feature:SELinux][Feature:SELinuxMountReadWriteOncePod] should add SELinux mount option to existing mount options
Kubernetes e2e suite [It] [sig-storage] CSI mock volume SELinuxMount [LinuxOnly][Feature:SELinux][Feature:SELinuxMountReadWriteOncePod] should not pass SELinux mount option for CSI driver that does not support SELinux mount
Kubernetes e2e suite [It] [sig-storage] CSI mock volume SELinuxMount [LinuxOnly][Feature:SELinux][Feature:SELinuxMountReadWriteOncePod] should not pass SELinux mount option for Pod without SELinux context
Kubernetes e2e suite [It] [sig-storage] CSI mock volume SELinuxMount [LinuxOnly][Feature:SELinux][Feature:SELinuxMountReadWriteOncePod] should not pass SELinux mount option for RWO volume
Kubernetes e2e suite [It] [sig-storage] CSI mock volume SELinuxMount [LinuxOnly][Feature:SELinux][Feature:SELinuxMountReadWriteOncePod] should pass SELinux mount option for RWOP volume and Pod with SELinux context set
Kubernetes e2e suite [It] [sig-storage] CSI mock volume storage capacity exhausted, immediate binding
Kubernetes e2e suite [It] [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology
Kubernetes e2e suite [It] [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology
Kubernetes e2e suite [It] [sig-storage] CSI mock volume storage capacity unlimited
Kubernetes e2e suite [It] [sig-storage] CSIInlineVolumes should support CSIVolumeSource in Pod API [Conformance]
Kubernetes e2e suite [It] [sig-storage] CSIInlineVolumes should support ephemeral VolumeLifecycleMode in CSIDriver API [Conformance]
Kubernetes e2e suite [It] [sig-storage] CSIStorageCapacity should support CSIStorageCapacities API operations [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
Kubernetes e2e suite [It] [sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
Kubernetes e2e suite [It] [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API [Serial] [Disruptive] [Feature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
Kubernetes e2e suite [It] [sig-storage] Downward API [Serial] [Disruptive] [Feature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner Default should be disabled by changing the default annotation [Serial] [Disruptive]
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner Default should be disabled by removing the default annotation [Serial] [Disruptive]
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner Default should create and delete default persistent volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner External should let an external dynamic provisioner create and delete persistent volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] [Feature:StorageProvider] deletion should be idempotent
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] [Feature:StorageProvider] should provision storage with different parameters
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] [Feature:StorageProvider] should provision storage with non-default reclaim policy Retain
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] [Feature:StorageProvider] should test that deleting a claim before the volume is provisioned deletes the volume.
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup
Kubernetes e2e suite [It] [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow]
Kubernetes e2e suite [It] [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
Kubernetes e2e suite [It] [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap
Kubernetes e2e suite [It] [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected
Kubernetes e2e suite [It] [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret
Kubernetes e2e suite [It] [sig-storage] Flexvolumes should be mountable when attachable [Feature:Flexvolumes]
Kubernetes e2e suite [It] [sig-storage] Flexvolumes should be mountable when non-attachable
Kubernetes e2e suite [It] [sig-storage] GKE local SSD [Feature:GKELocalSSD] should write and read from node local SSD [Feature:GKELocalSSD]
Kubernetes e2e suite [It] [sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a file written to the mount before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a volume mounted to a pod that is force deleted while the kubelet is down unmounts when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] HostPath should support r/w [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] HostPath should support subPath [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should fail on mounting non-existent block device 'does-not-exist-blk-dev' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should fail on mounting non-existent character device 'does-not-exist-char-dev' when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should be able to mount directory 'adir' successfully when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should be able to mount directory 'adir' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should fail on mounting non-existent directory 'does-not-exist-dir' when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should be able to mount file 'afile' successfully when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should be able to mount file 'afile' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should fail on mounting non-existent file 'does-not-exist-file' when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should fail on mounting non-existent socket 'does-not-exist-socket' when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node