Result                       | FAILURE
Tests                        | 9 failed / 321 succeeded
Started                      |
Elapsed                      | 1h36m
Revision                     | master
control_plane_node_os_image  | cos-85-13310-1308-1
job-version                  | v1.24.11-rc.0.11+73da4d3652771d
kubetest-version             | v20230127-9396ca613c
revision                     | v1.24.11-rc.0.11+73da4d3652771d
worker_node_os_image         | cos-85-13310-1308-1
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\sperform\srolling\supdates\sand\sroll\sbacks\sof\stemplate\smodifications\s\[Conformance\]$'
test/e2e/framework/framework.go:652
Feb 2 22:20:01.565: Failed waiting for state update: timed out waiting for the condition
test/e2e/apps/wait.go:124 (from junit_01.xml)
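The failure comes from the framework's rolling-update wait: it re-reads the StatefulSet roughly every ten seconds and succeeds only once every pod carries the controller-revision-hash of the update revision. A minimal sketch of that polling pattern using client-go (illustrative only; the real helper is waitForRollingUpdate in test/e2e/apps/wait.go, and the interval, timeout, and log wording here are assumptions):

package example

import (
	"context"
	"fmt"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForRollingUpdateSketch polls until every pod selected by the
// StatefulSet carries the controller-revision-hash of its update
// revision, mirroring the "Waiting for Pod ... to have revision ..."
// lines in the log below. Not the framework's exact code.
func WaitForRollingUpdateSketch(c kubernetes.Interface, ss *appsv1.StatefulSet) error {
	return wait.PollImmediate(10*time.Second, 10*time.Minute, func() (bool, error) {
		cur, err := c.AppsV1().StatefulSets(ss.Namespace).Get(context.TODO(), ss.Name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		sel := metav1.FormatLabelSelector(cur.Spec.Selector)
		pods, err := c.CoreV1().Pods(ss.Namespace).List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			return false, err
		}
		done := true
		for _, p := range pods.Items {
			// Each pod advertises its revision via this label; it is
			// visible in the kubectl describe output later in this log.
			if p.Labels["controller-revision-hash"] != cur.Status.UpdateRevision {
				fmt.Printf("Waiting for Pod %s/%s to reach revision %s\n", p.Namespace, p.Name, cur.Status.UpdateRevision)
				done = false
			}
		}
		return done, nil
	})
}

With a 10-minute ceiling, a single pod that never reaches the update revision, as happens below, is enough to surface as "timed out waiting for the condition".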
[BeforeEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Feb 2 22:09:08.785: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  test/e2e/apps/statefulset.go:96
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:111
STEP: Creating service test in namespace statefulset-4695
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a new StatefulSet
Feb 2 22:09:09.316: INFO: Found 1 stateful pods, waiting for 3
Feb 2 22:09:19.361: INFO: Found 2 stateful pods, waiting for 3
Feb 2 22:09:29.435: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 2 22:09:29.435: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 2 22:09:29.435: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 2 22:09:39.360: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 2 22:09:39.360: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 2 22:09:39.360: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb 2 22:09:39.496: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4695 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 2 22:09:40.146: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Feb 2 22:09:40.146: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 2 22:09:40.146: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-2
Feb 2 22:09:50.435: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb 2 22:09:50.564: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4695 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 2 22:09:51.131: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Feb 2 22:09:51.131: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 2 22:09:51.131: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
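The mv dance above is how the test paces the rollout: moving index.html out of the Apache docroot makes ss2-1 fail its readiness probe, which holds the rolling update at that ordinal, and moving it back lets the update proceed. The probe being tripped is the one shown in the kubectl describe output further down ("Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1"), which corresponds roughly to this corev1.Probe (a reconstruction from that describe line, not the test's literal source):

package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// ReadinessProbe is reconstructed from the describe output:
// http-get /index.html on port 80, delay=0s timeout=1s period=1s
// #success=1 #failure=1.
var ReadinessProbe = &corev1.Probe{
	ProbeHandler: corev1.ProbeHandler{
		HTTPGet: &corev1.HTTPGetAction{
			Path: "/index.html",
			Port: intstr.FromInt(80),
		},
	},
	InitialDelaySeconds: 0,
	TimeoutSeconds:      1,
	PeriodSeconds:       1,
	SuccessThreshold:    1,
	FailureThreshold:    1,
}

With FailureThreshold 1 and PeriodSeconds 1, a single failed GET (for example, while index.html is parked in /tmp) flips the pod to NotReady within about a second.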
Feb 2 22:10:01.393: INFO: Waiting for StatefulSet statefulset-4695/ss2 to complete update
Feb 2 22:10:01.393: INFO: Waiting for Pod statefulset-4695/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb
Feb 2 22:10:01.393: INFO: Waiting for Pod statefulset-4695/ss2-1 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb
Feb 2 22:10:01.393: INFO: Waiting for Pod statefulset-4695/ss2-2 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb
[this three-pod poll repeats at 22:10:11; from 22:10:21 onward only ss2-0 and ss2-1 are still waiting, and the same two-pod poll repeats every ~10s through Feb 2 22:20:01.479]
Feb 2 22:20:01.564: INFO: Waiting for StatefulSet statefulset-4695/ss2 to complete update
Feb 2 22:20:01.564: INFO: Waiting for Pod statefulset-4695/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb
Feb 2 22:20:01.564: INFO: Waiting for Pod statefulset-4695/ss2-1 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb
Feb 2 22:20:01.565: FAIL: Failed waiting for state update: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.waitForRollingUpdate({0x7aa8ff8, 0xc003bd1200}, 0xc000ccea00)
  test/e2e/apps/wait.go:124 +0x1cc
k8s.io/kubernetes/test/e2e/apps.rollbackTest({0x7aa8ff8, 0xc003bd1200}, {0xc004243120, 0x10}, 0xc000cfca00)
  test/e2e/apps/statefulset.go:1605 +0xabd
k8s.io/kubernetes/test/e2e/apps.glob..func9.2.7()
  test/e2e/apps/statefulset.go:307 +0xe6
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
  test/e2e/e2e.go:130 +0x6bb
k8s.io/kubernetes/test/e2e.TestE2E(0x24e5201?)
  test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000d124e0, 0x741f9a8)
  /usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
  /usr/local/go/src/testing/testing.go:1493 +0x35f

[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:122
Feb 2 22:20:01.609: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4695 describe po ss2-0'
Feb 2 22:20:01.899: INFO: stderr: ""
Feb 2 22:20:01.899: INFO: Output of kubectl describe ss2-0:
Name:         ss2-0
Namespace:    statefulset-4695
Priority:     0
Node:         e2e-7d89e54d79-37bac-windows-node-group-q21f/10.40.0.5
Start Time:   Thu, 02 Feb 2023 22:09:09 +0000
Labels:       baz=blah
              controller-revision-hash=ss2-57bbdd95cb
              foo=bar
              statefulset.kubernetes.io/pod-name=ss2-0
Annotations:  <none>
Status:       Running
IP:           10.64.3.88
IPs:
  IP:  10.64.3.88
Controlled By:  StatefulSet/ss2
Containers:
  webserver:
    Container ID:   containerd://04664ce096f4b78be4850f3a8455c5abd090cc7e47e8371ef95062da467d72b2
    Image:          k8s.gcr.io/e2e-test-images/httpd:2.4.38-2
    Image ID:       k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 02 Feb 2023 22:09:14 +0000
    Ready:          True
    Restart Count:  0
    Readiness:      http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gbl9x (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-gbl9x:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                From               Message
  ----     ------       ----               ----               -------
  Normal   Scheduled    10m                default-scheduler  Successfully assigned statefulset-4695/ss2-0 to e2e-7d89e54d79-37bac-windows-node-group-q21f
  Warning  FailedMount  10m                kubelet            MountVolume.SetUp failed for volume "kube-api-access-gbl9x" : failed to sync configmap cache: timed out waiting for the condition
  Normal   Pulled       10m                kubelet            Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2" already present on machine
  Normal   Created      10m                kubelet            Created container webserver
  Normal   Started      10m                kubelet            Started container webserver
  Warning  Unhealthy    10m (x2 over 10m)  kubelet            Readiness probe failed: Get "http://10.64.3.88:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Feb 2 22:20:01.899: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4695 logs ss2-0 --tail=100'
Feb 2 22:20:02.145: INFO: stderr: ""
Feb 2 22:20:02.146: INFO: stdout: ""
Feb 2 22:20:02.146: INFO: Last 100 log lines of ss2-0:
Feb 2 22:20:02.146: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4695 describe po ss2-1'
Feb 2 22:20:02.441: INFO: stderr: ""
Feb 2 22:20:02.441: INFO: Output of kubectl describe ss2-1:
Name:                      ss2-1
Namespace:                 statefulset-4695
Priority:                  0
Node:                      e2e-7d89e54d79-37bac-windows-node-group-k0qm/10.40.0.3
Start Time:                Thu, 02 Feb 2023 22:09:17 +0000
Labels:                    baz=blah
                           controller-revision-hash=ss2-57bbdd95cb
                           foo=bar
                           statefulset.kubernetes.io/pod-name=ss2-1
Annotations:               <none>
Status:                    Terminating (lasts 9m4s)
Termination Grace Period:  30s
IP:                        10.64.1.28
IPs:
  IP:  10.64.1.28
Controlled By:  StatefulSet/ss2
Containers:
  webserver:
    Container ID:   containerd://c4b8832f09fd046d5a3b33dd11504d3da65e53ca2a24f4be61a8dff367e27b24
    Image:          k8s.gcr.io/e2e-test-images/httpd:2.4.38-2
    Image ID:       k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 02 Feb 2023 22:09:24 +0000
      Finished:     Thu, 02 Feb 2023 22:10:46 +0000
    Ready:          False
    Restart Count:  0
    Readiness:      http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2lzj6 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-2lzj6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason         Age                    From               Message
  ----     ------         ----                   ----               -------
  Normal   Scheduled      10m                    default-scheduler  Successfully assigned statefulset-4695/ss2-1 to e2e-7d89e54d79-37bac-windows-node-group-k0qm
  Normal   Pulled         10m                    kubelet            Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2" already present on machine
  Normal   Created        10m                    kubelet            Created container webserver
  Normal   Started        10m                    kubelet            Started container webserver
  Warning  Unhealthy      10m (x12 over 10m)     kubelet            Readiness probe failed: HTTP probe failed with statuscode: 404
  Normal   Killing        9m34s                  kubelet            Stopping container webserver
  Warning  Unhealthy      9m26s (x9 over 10m)    kubelet            Readiness probe failed: Get "http://10.64.1.28:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Warning  FailedKillPod  5m13s (x2 over 7m13s)  kubelet            error killing pod: failed to "KillPodSandbox" for "50945517-7e5d-4d57-adca-bd2df1c7b8f3" with KillPodSandboxError: "rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Feb 2 22:20:02.441: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4695 logs ss2-1 --tail=100'
Feb 2 22:20:02.703: INFO: stderr: ""
Feb 2 22:20:02.703: INFO: stdout: ""
Feb 2 22:20:02.703: INFO: Last 100 log lines of ss2-1:
Feb 2 22:20:02.703: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4695 describe po ss2-2'
Feb 2 22:20:02.994: INFO: stderr: ""
Feb 2 22:20:02.995: INFO: Output of kubectl describe ss2-2:
Name:         ss2-2
Namespace:    statefulset-4695
Priority:     0
Node:         e2e-7d89e54d79-37bac-windows-node-group-jllf/10.40.0.4
Start Time:   Thu, 02 Feb 2023 22:10:21 +0000
Labels:       baz=blah
              controller-revision-hash=ss2-5f8764d585
              foo=bar
              statefulset.kubernetes.io/pod-name=ss2-2
Annotations:  <none>
Status:       Running
IP:           10.64.2.74
IPs:
  IP:  10.64.2.74
Controlled By:  StatefulSet/ss2
Containers:
  webserver:
    Container ID:   containerd://1c7000a8dd9ebb210a92b93223eafff6b7c36aad5f4695696627c38a865b002b
    Image:          k8s.gcr.io/e2e-test-images/httpd:2.4.39-2
    Image ID:       k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 02 Feb 2023 22:10:26 +0000
    Ready:          True
    Restart Count:  0
    Readiness:      http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f4gfg (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-f4gfg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age    From               Message
  ----     ------     ----   ----               -------
  Normal   Scheduled  9m41s  default-scheduler  Successfully assigned statefulset-4695/ss2-2 to e2e-7d89e54d79-37bac-windows-node-group-jllf
  Normal   Pulled     9m38s  kubelet            Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.39-2" already present on machine
  Normal   Created    9m38s  kubelet            Created container webserver
  Normal   Started    9m36s  kubelet            Started container webserver
  Warning  Unhealthy  9m35s  kubelet            Readiness probe failed: Get "http://10.64.2.74:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Feb 2 22:20:02.995: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=statefulset-4695 logs ss2-2 --tail=100'
Feb 2 22:20:03.243: INFO: stderr: ""
Feb 2 22:20:03.243: INFO: stdout: ""
Feb 2 22:20:03.244: INFO: Last 100 log lines of ss2-2:
Feb 2 22:20:03.244: INFO: Deleting all statefulset in ns statefulset-4695
Feb 2 22:20:03.293: INFO: Scaling statefulset ss2 to 0
Feb 2 22:30:03.547: INFO: Waiting for statefulset status.replicas updated to 0
Feb 2 22:30:03.589: INFO: Waiting for stateful set status.replicas to become 0, currently 2
[the same line repeats every ~10s; status.replicas never drops below 2]
Feb 2 22:40:03.709: INFO: Waiting for stateful set status.replicas to become 0, currently 2
Feb 2 22:40:03.709: FAIL: Failed waiting for stateful set status.replicas updated to 0: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/statefulset.DeleteAllStatefulSets({0x7aa8ff8, 0xc003bd1200}, {0xc004243120, 0x10})
  test/e2e/framework/statefulset/rest.go:86 +0x339
k8s.io/kubernetes/test/e2e/apps.glob..func9.2.2()
  test/e2e/apps/statefulset.go:127 +0x112
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
  test/e2e/e2e.go:130 +0x6bb
k8s.io/kubernetes/test/e2e.TestE2E(0x24e5201?)
  test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000d124e0, 0x741f9a8)
  /usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
  /usr/local/go/src/testing/testing.go:1493 +0x35f
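DeleteAllStatefulSets first scales each set to zero and then polls status.replicas; because ss2-0 and ss2-1 never finish terminating, the count is stuck at 2 until this second 10-minute timeout fires. A sketch of that scale-and-wait sequence (assumptions: a typed clientset c and these intervals; the real code lives in test/e2e/framework/statefulset/rest.go):

package example

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// ScaleToZeroSketch scales a StatefulSet to zero replicas and waits for
// its status to catch up, mirroring the timed-out loop above.
func ScaleToZeroSketch(c kubernetes.Interface, ns, name string) error {
	ss, err := c.AppsV1().StatefulSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	zero := int32(0)
	ss.Spec.Replicas = &zero
	if _, err := c.AppsV1().StatefulSets(ns).Update(context.TODO(), ss, metav1.UpdateOptions{}); err != nil {
		return err
	}
	return wait.PollImmediate(10*time.Second, 10*time.Minute, func() (bool, error) {
		cur, err := c.AppsV1().StatefulSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if cur.Status.Replicas != 0 {
			// Matches the "Waiting for stateful set status.replicas to
			// become 0, currently N" lines above.
			fmt.Printf("Waiting for stateful set status.replicas to become 0, currently %d\n", cur.Status.Replicas)
			return false, nil
		}
		return true, nil
	})
}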
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:188
STEP: Collecting events from namespace "statefulset-4695".
STEP: Found 32 events.
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:09:09 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-0 in StatefulSet ss2 successful
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:09:09 +0000 UTC - event for ss2-0: {default-scheduler } Scheduled: Successfully assigned statefulset-4695/ss2-0 to e2e-7d89e54d79-37bac-windows-node-group-q21f
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:09:10 +0000 UTC - event for ss2-0: {kubelet e2e-7d89e54d79-37bac-windows-node-group-q21f} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-gbl9x" : failed to sync configmap cache: timed out waiting for the condition
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:09:12 +0000 UTC - event for ss2-0: {kubelet e2e-7d89e54d79-37bac-windows-node-group-q21f} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2" already present on machine
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:09:12 +0000 UTC - event for ss2-0: {kubelet e2e-7d89e54d79-37bac-windows-node-group-q21f} Created: Created container webserver
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:09:14 +0000 UTC - event for ss2-0: {kubelet e2e-7d89e54d79-37bac-windows-node-group-q21f} Started: Started container webserver
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:09:15 +0000 UTC - event for ss2-0: {kubelet e2e-7d89e54d79-37bac-windows-node-group-q21f} Unhealthy: Readiness probe failed: Get "http://10.64.3.88:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:09:17 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-1 in StatefulSet ss2 successful
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:09:17 +0000 UTC - event for ss2-1: {default-scheduler } Scheduled: Successfully assigned statefulset-4695/ss2-1 to e2e-7d89e54d79-37bac-windows-node-group-k0qm
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:09:19 +0000 UTC - event for ss2-1: {kubelet e2e-7d89e54d79-37bac-windows-node-group-k0qm} Created: Created container webserver
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:09:19 +0000 UTC - event for ss2-1: {kubelet e2e-7d89e54d79-37bac-windows-node-group-k0qm} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2" already present on machine
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:09:24 +0000 UTC - event for ss2-1: {kubelet e2e-7d89e54d79-37bac-windows-node-group-k0qm} Started: Started container webserver
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:09:26 +0000 UTC - event for ss2-1: {kubelet e2e-7d89e54d79-37bac-windows-node-group-k0qm} Unhealthy: Readiness probe failed: Get "http://10.64.1.28:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:09:27 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-2 in StatefulSet ss2 successful
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:09:27 +0000 UTC - event for ss2-2: {default-scheduler } Scheduled: Successfully assigned statefulset-4695/ss2-2 to e2e-7d89e54d79-37bac-windows-node-group-jllf
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:09:29 +0000 UTC - event for ss2-2: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Created: Created container webserver
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:09:29 +0000 UTC - event for ss2-2: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2" already present on machine
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:09:31 +0000 UTC - event for ss2-2: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Started: Started container webserver
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:09:33 +0000 UTC - event for ss2-2: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Unhealthy: Readiness probe failed: Get "http://10.64.2.60:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:09:40 +0000 UTC - event for ss2-1: {kubelet e2e-7d89e54d79-37bac-windows-node-group-k0qm} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 404
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:09:51 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-2 in StatefulSet ss2 successful
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:09:51 +0000 UTC - event for ss2-2: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Killing: Stopping container webserver
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:10:21 +0000 UTC - event for ss2-2: {default-scheduler } Scheduled: Successfully assigned statefulset-4695/ss2-2 to e2e-7d89e54d79-37bac-windows-node-group-jllf
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:10:24 +0000 UTC - event for ss2-2: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.39-2" already present on machine
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:10:24 +0000 UTC - event for ss2-2: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Created: Created container webserver
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:10:26 +0000 UTC - event for ss2-2: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Started: Started container webserver
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:10:27 +0000 UTC - event for ss2-2: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Unhealthy: Readiness probe failed: Get "http://10.64.2.74:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:10:28 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-1 in StatefulSet ss2 successful
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:10:28 +0000 UTC - event for ss2-1: {kubelet e2e-7d89e54d79-37bac-windows-node-group-k0qm} Killing: Stopping container webserver
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:12:49 +0000 UTC - event for ss2-1: {kubelet e2e-7d89e54d79-37bac-windows-node-group-k0qm} FailedKillPod: error killing pod: failed to "KillPodSandbox" for "50945517-7e5d-4d57-adca-bd2df1c7b8f3" with KillPodSandboxError: "rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:20:03 +0000 UTC - event for ss2-2: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Killing: Stopping container webserver
Feb 2 22:40:03.763: INFO: At 2023-02-02 22:20:05 +0000 UTC - event for ss2-2: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Unhealthy: Readiness probe failed: Get "http://10.64.2.74:80/index.html": read tcp 10.64.2.2:59121->10.64.2.74:80: wsarecv: An existing connection was forcibly closed by the remote host.
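The FailedKillPod events are the proximate cause of both timeouts: the Windows node's container runtime could not tear down the pod sandbox for ss2-1 ("KillPodSandbox ... DeadlineExceeded"), so the pod sat in Terminating and the controller could neither finish the rolling update nor scale the set to zero. When triaging a run like this by hand, a pod stuck this way can usually be cleared with a zero-grace-period delete; a hypothetical client-go snippet, roughly equivalent to kubectl delete pod ss2-1 -n statefulset-4695 --grace-period=0 --force:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the same kubeconfig this job uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	c := kubernetes.NewForConfigOrDie(cfg)

	// GracePeriodSeconds of 0 asks the API server to delete immediately
	// instead of waiting out the pod's 30s termination grace period.
	grace := int64(0)
	if err := c.CoreV1().Pods("statefulset-4695").Delete(context.TODO(), "ss2-1",
		metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
		panic(err)
	}
}

Note that a forced delete only removes the API object; it does not fix the underlying sandbox-teardown failure on the node, which is what this test run would need root-caused.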
Feb 2 22:40:03.807: INFO: POD NODE PHASE GRACE CONDITIONS
Feb 2 22:40:03.807: INFO: ss2-0 e2e-7d89e54d79-37bac-windows-node-group-q21f Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 22:09:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 22:09:17 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 22:09:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 22:09:09 +0000 UTC }]
Feb 2 22:40:03.807: INFO: ss2-1 e2e-7d89e54d79-37bac-windows-node-group-k0qm Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 22:09:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 22:10:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 22:10:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 22:09:17 +0000 UTC }]
Feb 2 22:40:03.807: INFO:
Feb 2 22:40:03.996: INFO: Logging node info for node e2e-7d89e54d79-37bac-master
Feb 2 22:40:04.039: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-master b403d958-aef5-4e5e-9b07-9812dc3e7d8b 22191 0 2023-02-02 21:23:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:23:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-02-02 21:23:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-02-02 21:23:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kubelet Update v1 2023-02-02 21:23:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}
status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3864313856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3602169856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:23:35 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 22:35:25 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 22:35:25 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 22:35:25 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 22:35:25 +0000 UTC,LastTransitionTime:2023-02-02 21:23:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.2,},NodeAddress{Type:ExternalIP,Address:35.247.98.204,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-master.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-master.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e8df098cf83a91bc3c7c2a97ba5a41e9,SystemUUID:e8df098c-f83a-91bc-3c7c-2a97ba5a41e9,BootID:59df2086-4103-4b38-9939-c916841efb98,KernelVersion:5.4.129+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:131733971,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:121342787,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:52751170,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:be60ef505fc80879eeb5d8bf3ad8bb1146b395afc2394584645e99431806c26c gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.12.0],SizeBytes:32705362,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:d863f7fd0da4392b9753dc6c9195a658e80d70e0be8c9adb410d77cf20b75c76 registry.k8s.io/kas-network-proxy/proxy-server:v0.0.35],SizeBytes:21985251,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 22:40:04.040: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-master Feb 2 22:40:04.081: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-master Feb 2 22:40:04.140: INFO: kube-controller-manager-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 22:40:04.140: INFO: Container kube-controller-manager ready: true, restart count 2 Feb 2 22:40:04.140: INFO: etcd-server-events-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 22:40:04.140: INFO: Container etcd-container ready: true, restart count 0 Feb 2 22:40:04.140: INFO: kube-addon-manager-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:58 +0000 UTC (0+1 container statuses recorded) Feb 2 22:40:04.140: INFO: Container kube-addon-manager ready: true, restart count 0 Feb 2 22:40:04.140: 
INFO: metadata-proxy-v0.1-fmxnz started at 2023-02-02 21:23:46 +0000 UTC (0+2 container statuses recorded) Feb 2 22:40:04.140: INFO: Container metadata-proxy ready: true, restart count 0 Feb 2 22:40:04.140: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Feb 2 22:40:04.140: INFO: konnectivity-server-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 22:40:04.140: INFO: Container konnectivity-server-container ready: true, restart count 0 Feb 2 22:40:04.140: INFO: kube-apiserver-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 22:40:04.140: INFO: Container kube-apiserver ready: true, restart count 0 Feb 2 22:40:04.140: INFO: kube-scheduler-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 22:40:04.140: INFO: Container kube-scheduler ready: true, restart count 0 Feb 2 22:40:04.140: INFO: etcd-server-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 22:40:04.140: INFO: Container etcd-container ready: true, restart count 0 Feb 2 22:40:04.140: INFO: l7-lb-controller-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:58 +0000 UTC (0+1 container statuses recorded) Feb 2 22:40:04.140: INFO: Container l7-lb-controller ready: true, restart count 3 Feb 2 22:40:04.344: INFO: Latency metrics for node e2e-7d89e54d79-37bac-master Feb 2 22:40:04.344: INFO: Logging node info for node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 22:40:04.388: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-minion-group-1vp1 d81fe224-05dd-48a7-9693-e2f2826a1b97 22649 0 2023-02-02 21:23:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-minion-group-1vp1 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-4 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-02-02 21:23:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-02-02 21:23:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.5.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:23:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-02-02 21:23:43 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-02-02 21:24:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.5.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-minion-group-1vp1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.5.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15735660544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15473516544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-02-02 22:38:53 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-02-02 22:38:53 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-02-02 22:38:53 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-02-02 22:38:53 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-02-02 22:38:53 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning 
properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-02-02 22:38:53 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-02-02 22:38:53 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:23:38 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 22:38:24 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 22:38:24 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 22:38:24 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 22:38:24 +0000 UTC,LastTransitionTime:2023-02-02 21:23:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.7,},NodeAddress{Type:ExternalIP,Address:35.197.102.154,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-minion-group-1vp1.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-minion-group-1vp1.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ade76a2ad94b4b90c3b7ba811704d98c,SystemUUID:ade76a2a-d94b-4b90-c3b7-ba811704d98c,BootID:29452487-f38a-42cd-8605-aecb73730dd9,KernelVersion:5.4.129+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:112212023,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4 
k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0],SizeBytes:18952261,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[k8s.gcr.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 k8s.gcr.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:8970dca5c4c9df1d566c3c3c91ef2e743e410a8623d42062eb48e7245f1eef93 registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.35],SizeBytes:8488019,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 k8s.gcr.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 22:40:04.388: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 22:40:04.431: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 22:40:04.507: INFO: metadata-proxy-v0.1-kmxp5 started at 2023-02-02 21:23:39 +0000 UTC (0+2 container statuses recorded) Feb 2 22:40:04.507: INFO: Container metadata-proxy ready: true, restart count 0 Feb 2 22:40:04.507: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Feb 2 22:40:04.507: INFO: kube-proxy-e2e-7d89e54d79-37bac-minion-group-1vp1 started at 2023-02-02 21:23:38 +0000 UTC (0+1 container statuses recorded) Feb 2 22:40:04.507: INFO: Container kube-proxy ready: true, restart count 0 Feb 2 22:40:04.507: INFO: coredns-8c79ffd8b-rd5tr started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 22:40:04.507: INFO: Container coredns ready: true, restart count 0 Feb 2 22:40:04.507: INFO: l7-default-backend-8667cd4ffc-pgmnb started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 22:40:04.507: INFO: Container default-http-backend ready: true, restart count 0 Feb 2 22:40:04.507: INFO: volume-snapshot-controller-0 started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 22:40:04.507: INFO: Container volume-snapshot-controller ready: true, restart count 0 Feb 2 22:40:04.507: INFO: konnectivity-agent-mn5mq started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 22:40:04.507: INFO: 
Container konnectivity-agent ready: true, restart count 0 Feb 2 22:40:04.507: INFO: kube-dns-autoscaler-596f6cf79f-v76jk started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 22:40:04.507: INFO: Container autoscaler ready: true, restart count 0 Feb 2 22:40:04.696: INFO: Latency metrics for node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 22:40:04.697: INFO: Logging node info for node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 22:40:04.741: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-minion-group-fhnf 9c0dcb7a-8a6b-4535-afe8-b62bf19173f7 22648 0 2023-02-02 21:23:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-minion-group-fhnf kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-4 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:23:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.4.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:23:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-02-02 21:23:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-02-02 21:23:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-02-02 21:24:09 
+0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.4.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-minion-group-fhnf,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.4.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15735660544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15473516544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-02-02 22:38:53 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-02-02 22:38:53 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-02-02 22:38:53 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-02-02 22:38:53 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-02-02 22:38:53 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-02-02 22:38:53 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-02-02 22:38:53 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:23:39 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 22:37:43 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 22:37:43 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 22:37:43 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 22:37:43 +0000 UTC,LastTransitionTime:2023-02-02 21:23:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.6,},NodeAddress{Type:ExternalIP,Address:34.127.30.111,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-minion-group-fhnf.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-minion-group-fhnf.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8abcabc408b3bd715147992f3d5a5854,SystemUUID:8abcabc4-08b3-bd71-5147-992f3d5a5854,BootID:8d0324c2-172b-4c43-81ee-83b6878e11ee,KernelVersion:5.4.129+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:112212023,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[k8s.gcr.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 k8s.gcr.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:8970dca5c4c9df1d566c3c3c91ef2e743e410a8623d42062eb48e7245f1eef93 registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.35],SizeBytes:8488019,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c 
registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 22:40:04.741: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 22:40:04.785: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 22:40:04.850: INFO: metadata-proxy-v0.1-xl4fd started at 2023-02-02 21:23:40 +0000 UTC (0+2 container statuses recorded) Feb 2 22:40:04.850: INFO: Container metadata-proxy ready: true, restart count 0 Feb 2 22:40:04.850: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Feb 2 22:40:04.850: INFO: konnectivity-agent-k667p started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 22:40:04.850: INFO: Container konnectivity-agent ready: true, restart count 0 Feb 2 22:40:04.850: INFO: kube-proxy-e2e-7d89e54d79-37bac-minion-group-fhnf started at 2023-02-02 21:23:39 +0000 UTC (0+1 container statuses recorded) Feb 2 22:40:04.850: INFO: Container kube-proxy ready: true, restart count 0 Feb 2 22:40:04.850: INFO: metrics-server-v0.5.2-6d6794c8cd-9vklc started at 2023-02-02 21:24:01 +0000 UTC (0+2 container statuses recorded) Feb 2 22:40:04.850: INFO: Container metrics-server ready: true, restart count 0 Feb 2 22:40:04.850: INFO: Container metrics-server-nanny ready: true, restart count 0 Feb 2 22:40:04.850: INFO: coredns-8c79ffd8b-4v5p9 started at 2023-02-02 21:23:54 +0000 UTC (0+1 container statuses recorded) Feb 2 22:40:04.850: INFO: Container coredns ready: true, restart count 0 Feb 2 22:40:05.027: INFO: Latency metrics for node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 22:40:05.028: INFO: Logging node info for node e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 22:40:05.070: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-windows-node-group-jllf 1f67dcc8-9253-4e93-8b90-78810a8df879 22505 0 2023-02-02 21:29:09 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-windows-node-group-jllf kubernetes.io/os:windows node.kubernetes.io/instance-type:n1-standard-4 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:29:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:29:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet.exe Update v1 2023-02-02 21:29:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet.exe Update v1 2023-02-02 21:30:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-windows-node-group-jllf,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{107252527104 0} {<nil>} 104738796Ki BinarySI},memory: {{16102309888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{96527274234 0} {<nil>} 96527274234 DecimalSI},memory: {{15840165888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:29:09 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 22:37:49 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 22:37:49 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 22:37:49 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 22:37:49 +0000 UTC,LastTransitionTime:2023-02-02 21:29:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.4,},NodeAddress{Type:ExternalIP,Address:34.83.75.252,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-windows-node-group-jllf.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-windows-node-group-jllf,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e2e-7d89e54d79-37bac-windows-node-group-jllf,SystemUUID:23C88569-8B16-0615-6BFF-BB819EADA98A,BootID:9,KernelVersion:10.0.17763.2183,OSImage:Windows Server 2019 
Datacenter,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:204397145,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c k8s.gcr.io/etcd:3.5.6-0],SizeBytes:203784192,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:203202672,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:104484632,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:102745583,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 22:40:05.070: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 22:40:05.112: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 22:40:05.371: INFO: Latency metrics for node e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 22:40:05.371: INFO: Logging node info for node e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 22:40:05.414: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-windows-node-group-k0qm 91cb59e3-df60-4007-bdc1-bb197e591e43 22694 0 2023-02-02 21:29:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-windows-node-group-k0qm kubernetes.io/os:windows node.kubernetes.io/instance-type:n1-standard-4 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet.exe Update v1 2023-02-02 21:29:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-02-02 21:29:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:29:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet.exe Update v1 2023-02-02 21:30:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-windows-node-group-k0qm,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{107252527104 0} {<nil>} 104738796Ki BinarySI},memory: {{16102309888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{96527274234 0} {<nil>} 96527274234 DecimalSI},memory: {{15840165888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:29:24 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 22:39:13 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 22:39:13 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 22:39:13 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 22:39:13 +0000 UTC,LastTransitionTime:2023-02-02 21:29:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.1.208,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-windows-node-group-k0qm.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-windows-node-group-k0qm,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e2e-7d89e54d79-37bac-windows-node-group-k0qm,SystemUUID:FC53E984-3141-4AB0-99D2-83726BB3072F,BootID:9,KernelVersion:10.0.17763.2183,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:204397145,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:203202672,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:179603451,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:104484632,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/windows-nanoserver@sha256:fb9b25770487567c02bf90dd3edea7917323556d1b7ba81ec042ffd5f9effeae gcr.io/authenticated-image-pulling/windows-nanoserver:v1],SizeBytes:101148102,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 22:40:05.414: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 22:40:05.455: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 22:40:05.502: INFO: update-demo-nautilus-tcxbs started at 2023-02-02 22:14:23 +0000 UTC (0+1 container statuses recorded) Feb 2 22:40:05.502: INFO: Container update-demo ready: false, restart count 2 Feb 2 22:40:05.502: INFO: ss2-1 started at 2023-02-02 22:09:17 +0000 UTC (0+1 container statuses recorded) Feb 2 22:40:05.502: INFO: Container webserver ready: false, restart count 0 Feb 2 22:42:05.600: INFO: Logging node info for node e2e-7d89e54d79-37bac-windows-node-group-q21f Feb 2 22:42:05.645: INFO: Node Info: 
&Node{ObjectMeta:{e2e-7d89e54d79-37bac-windows-node-group-q21f eef3ae47-aa0d-4af8-87e8-4c4de04eace2 22454 0 2023-02-02 21:29:14 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-windows-node-group-q21f kubernetes.io/os:windows node.kubernetes.io/instance-type:n1-standard-4 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:29:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:29:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet.exe Update v1 2023-02-02 21:29:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet.exe Update v1 2023-02-02 21:30:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-windows-node-group-q21f,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{107252527104 0} {<nil>} 104738796Ki BinarySI},memory: {{16102309888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{96527274234 0} {<nil>} 96527274234 DecimalSI},memory: {{15840165888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:29:14 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 22:37:26 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 22:37:26 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 22:37:26 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 22:37:26 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.230.207,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-windows-node-group-q21f.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-windows-node-group-q21f,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e2e-7d89e54d79-37bac-windows-node-group-q21f,SystemUUID:B1BBE679-4138-5169-4472-E3B13289F193,BootID:9,KernelVersion:10.0.17763.2183,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:204397145,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c k8s.gcr.io/etcd:3.5.6-0],SizeBytes:203784192,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:203202672,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:179603451,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:104484632,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:102745583,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 22:42:05.645: INFO: Logging 
kubelet events for node e2e-7d89e54d79-37bac-windows-node-group-q21f
Feb 2 22:42:05.686: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-windows-node-group-q21f
Feb 2 22:42:05.769: INFO: ss2-0 started at 2023-02-02 22:09:09 +0000 UTC (0+1 container statuses recorded)
Feb 2 22:42:05.769: INFO: Container webserver ready: true, restart count 0
Feb 2 22:42:06.516: INFO: Latency metrics for node e2e-7d89e54d79-37bac-windows-node-group-q21f
Feb 2 22:42:06.516: INFO: Waiting up to 3m0s for all (but 3) nodes to be ready
STEP: Destroying namespace "statefulset-4695" for this suite.
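The teardown snapshot is internally consistent: ss2-0 on the q21f node is Ready, while ss2-1 on k0qm never returned to Ready after its KillPodSandbox call timed out, so the rolling update could not converge before the test's wait expired. For orientation, here is a minimal client-go sketch of the convergence condition such a wait polls for; waitForRolledOut and the timings are illustrative, not the e2e framework's actual helper:

	package sketch

	import (
		"context"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForRolledOut polls until the StatefulSet controller reports that
	// every replica runs the update revision and is Ready.
	func waitForRolledOut(c kubernetes.Interface, ns, name string) error {
		return wait.PollImmediate(10*time.Second, 10*time.Minute, func() (bool, error) {
			ss, err := c.AppsV1().StatefulSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			replicas := *ss.Spec.Replicas
			// Converged: the current revision has caught up with the update
			// revision, and every pod is both updated and Ready.
			return ss.Status.CurrentRevision == ss.Status.UpdateRevision &&
				ss.Status.UpdatedReplicas == replicas &&
				ss.Status.ReadyReplicas == replicas, nil
		})
	}

A pod stuck NotReady, as ss2-1 is here, pins Status.ReadyReplicas below spec.replicas indefinitely, so a wait of this shape can only end in a timeout.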
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\sUpdate\sDemo\sshould\screate\sand\sstop\sa\sreplication\scontroller\s\s\[Conformance\]$'
test/e2e/framework/framework.go:652
Feb 2 22:19:28.577: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
test/e2e/kubectl/kubectl.go:315 from junit_04.xml
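The log that follows polls each update-demo pod with a kubectl go-template that prints "true" only when the container named update-demo reports a running state; update-demo-nautilus-gqsht passes repeatedly, but update-demo-nautilus-tcxbs never does within the 300-second window. The same check, written against client-go instead of a go-template (a hedged sketch; updateDemoRunning is an illustrative name, not a framework helper):

	package sketch

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// updateDemoRunning mirrors the template
	// {{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}:
	// it reports whether the pod's "update-demo" container is currently Running.
	func updateDemoRunning(c kubernetes.Interface, ns, pod string) (bool, error) {
		p, err := c.CoreV1().Pods(ns).Get(context.TODO(), pod, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cs := range p.Status.ContainerStatuses {
			if cs.Name == "update-demo" && cs.State.Running != nil {
				return true, nil
			}
		}
		return false, nil
	}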
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Feb 2 22:14:22.127: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/kubectl/kubectl.go:245
[BeforeEach] Update Demo
  test/e2e/kubectl/kubectl.go:297
[It] should create and stop a replication controller [Conformance]
  test/e2e/framework/framework.go:652
STEP: creating a replication controller
Feb 2 22:14:22.420: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 create -f -'
Feb 2 22:14:23.903: INFO: stderr: ""
Feb 2 22:14:23.903: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 2 22:14:23.903: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Feb 2 22:14:24.109: INFO: stderr: ""
Feb 2 22:14:24.109: INFO: stdout: "update-demo-nautilus-gqsht update-demo-nautilus-tcxbs "
Feb 2 22:14:24.109: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Feb 2 22:14:24.308: INFO: stderr: ""
Feb 2 22:14:24.308: INFO: stdout: ""
Feb 2 22:14:24.308: INFO: update-demo-nautilus-gqsht is created but not running
Feb 2 22:14:29.310: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Feb 2 22:14:29.516: INFO: stderr: ""
Feb 2 22:14:29.517: INFO: stdout: "update-demo-nautilus-gqsht update-demo-nautilus-tcxbs "
Feb 2 22:14:29.517: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Feb 2 22:14:29.719: INFO: stderr: ""
Feb 2 22:14:29.719: INFO: stdout: "true"
Feb 2 22:14:29.719: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Feb 2 22:14:29.919: INFO: stderr: ""
Feb 2 22:14:29.919: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5"
Feb 2 22:14:29.919: INFO: validating pod update-demo-nautilus-gqsht
Feb 2 22:14:31.055: INFO: got data: { "image": "nautilus.jpg" }
Feb 2 22:14:31.055: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 2 22:14:31.055: INFO: update-demo-nautilus-gqsht is verified up and running
Feb 2 22:14:31.055: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-tcxbs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Feb 2 22:14:31.273: INFO: stderr: ""
Feb 2 22:14:31.274: INFO: stdout: ""
Feb 2 22:14:31.274: INFO: update-demo-nautilus-tcxbs is created but not running
From here the identical five-second poll cycle repeats: each pass re-lists the two pods, re-verifies update-demo-nautilus-gqsht as up and running, and finds update-demo-nautilus-tcxbs still not running:
Feb 2 22:14:37.115: INFO: update-demo-nautilus-tcxbs is created but not running
Feb 2 22:14:42.953: INFO: update-demo-nautilus-tcxbs is created but not running
Feb 2 22:14:48.868: INFO: update-demo-nautilus-tcxbs is created but not running
Feb 2 22:14:54.714: INFO: update-demo-nautilus-tcxbs is created but not running
Feb 2 22:15:00.557: INFO: update-demo-nautilus-tcxbs is created but not running
Feb 2 22:15:06.392: INFO: update-demo-nautilus-tcxbs is created but not running
Feb 2 22:15:12.230: INFO: update-demo-nautilus-tcxbs is created but not running
Feb 2 22:15:18.073: INFO: update-demo-nautilus-tcxbs is created but not running
Feb 2 22:15:23.913: INFO: update-demo-nautilus-tcxbs is created but not running
Feb 2 22:15:29.770: INFO: update-demo-nautilus-tcxbs is created but not running
Feb 2 22:15:35.605: INFO: update-demo-nautilus-tcxbs is created but not running
Feb 2 22:15:41.438: INFO: update-demo-nautilus-tcxbs is created but not running
Feb 2 22:15:47.269: INFO: update-demo-nautilus-tcxbs is created but not running
Feb 2 22:15:53.096: INFO: update-demo-nautilus-tcxbs is created but not running
Feb 2 22:15:58.946: INFO: update-demo-nautilus-tcxbs is created but not running
Feb 2 22:16:04.819: INFO: update-demo-nautilus-tcxbs is created but not running
Feb 2 22:16:10.677: INFO: update-demo-nautilus-tcxbs is created but not running
Feb 2 22:16:16.510: INFO: update-demo-nautilus-tcxbs is created but not running
Feb 2 22:16:22.333: INFO: update-demo-nautilus-tcxbs is created but not running
Feb 2 22:16:28.164: INFO: update-demo-nautilus-tcxbs is created but not running
Feb 2 22:16:34.006: INFO: update-demo-nautilus-tcxbs is created but not running
Feb 2 22:16:39.872: INFO: update-demo-nautilus-tcxbs is created but not running
Feb 2 22:16:45.710: INFO: update-demo-nautilus-tcxbs is created but not running
Feb 2 22:16:51.559: INFO: update-demo-nautilus-tcxbs is created but not running
Feb 2 22:16:57.401: INFO: update-demo-nautilus-tcxbs is created but not running
Feb 2 22:17:03.233: INFO: update-demo-nautilus-tcxbs is created but not running
Feb 2 22:17:08.235: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Feb 2 22:17:08.434: INFO: stderr: ""
Feb 2 22:17:08.434: INFO: stdout: "update-demo-nautilus-gqsht update-demo-nautilus-tcxbs "
Feb 2 22:17:08.434: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists .
"state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:17:08.634: INFO: stderr: "" Feb 2 22:17:08.634: INFO: stdout: "true" Feb 2 22:17:08.634: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 2 22:17:08.836: INFO: stderr: "" Feb 2 22:17:08.836: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Feb 2 22:17:08.836: INFO: validating pod update-demo-nautilus-gqsht Feb 2 22:17:08.882: INFO: got data: { "image": "nautilus.jpg" } Feb 2 22:17:08.882: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 2 22:17:08.882: INFO: update-demo-nautilus-gqsht is verified up and running Feb 2 22:17:08.882: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-tcxbs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:17:09.081: INFO: stderr: "" Feb 2 22:17:09.081: INFO: stdout: "" Feb 2 22:17:09.081: INFO: update-demo-nautilus-tcxbs is created but not running Feb 2 22:17:14.082: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 2 22:17:14.287: INFO: stderr: "" Feb 2 22:17:14.287: INFO: stdout: "update-demo-nautilus-gqsht update-demo-nautilus-tcxbs " Feb 2 22:17:14.287: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:17:14.486: INFO: stderr: "" Feb 2 22:17:14.486: INFO: stdout: "true" Feb 2 22:17:14.487: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 2 22:17:14.684: INFO: stderr: "" Feb 2 22:17:14.684: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Feb 2 22:17:14.684: INFO: validating pod update-demo-nautilus-gqsht Feb 2 22:17:14.732: INFO: got data: { "image": "nautilus.jpg" } Feb 2 22:17:14.732: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Feb 2 22:17:14.732: INFO: update-demo-nautilus-gqsht is verified up and running Feb 2 22:17:14.732: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-tcxbs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:17:14.925: INFO: stderr: "" Feb 2 22:17:14.925: INFO: stdout: "" Feb 2 22:17:14.925: INFO: update-demo-nautilus-tcxbs is created but not running Feb 2 22:17:19.929: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 2 22:17:20.126: INFO: stderr: "" Feb 2 22:17:20.126: INFO: stdout: "update-demo-nautilus-gqsht update-demo-nautilus-tcxbs " Feb 2 22:17:20.126: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:17:20.325: INFO: stderr: "" Feb 2 22:17:20.325: INFO: stdout: "true" Feb 2 22:17:20.325: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 2 22:17:20.521: INFO: stderr: "" Feb 2 22:17:20.521: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Feb 2 22:17:20.521: INFO: validating pod update-demo-nautilus-gqsht Feb 2 22:17:20.568: INFO: got data: { "image": "nautilus.jpg" } Feb 2 22:17:20.568: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 2 22:17:20.568: INFO: update-demo-nautilus-gqsht is verified up and running Feb 2 22:17:20.568: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-tcxbs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:17:20.769: INFO: stderr: "" Feb 2 22:17:20.769: INFO: stdout: "" Feb 2 22:17:20.769: INFO: update-demo-nautilus-tcxbs is created but not running Feb 2 22:17:25.770: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 2 22:17:25.973: INFO: stderr: "" Feb 2 22:17:25.973: INFO: stdout: "update-demo-nautilus-gqsht update-demo-nautilus-tcxbs " Feb 2 22:17:25.973: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:17:26.170: INFO: stderr: "" Feb 2 22:17:26.171: INFO: stdout: "true" Feb 2 22:17:26.171: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 2 22:17:26.364: INFO: stderr: "" Feb 2 22:17:26.364: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Feb 2 22:17:26.364: INFO: validating pod update-demo-nautilus-gqsht Feb 2 22:17:26.412: INFO: got data: { "image": "nautilus.jpg" } Feb 2 22:17:26.412: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 2 22:17:26.412: INFO: update-demo-nautilus-gqsht is verified up and running Feb 2 22:17:26.412: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-tcxbs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:17:26.611: INFO: stderr: "" Feb 2 22:17:26.611: INFO: stdout: "" Feb 2 22:17:26.611: INFO: update-demo-nautilus-tcxbs is created but not running Feb 2 22:17:31.612: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 2 22:17:31.811: INFO: stderr: "" Feb 2 22:17:31.811: INFO: stdout: "update-demo-nautilus-gqsht update-demo-nautilus-tcxbs " Feb 2 22:17:31.811: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:17:32.009: INFO: stderr: "" Feb 2 22:17:32.009: INFO: stdout: "true" Feb 2 22:17:32.009: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 2 22:17:32.208: INFO: stderr: "" Feb 2 22:17:32.209: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Feb 2 22:17:32.209: INFO: validating pod update-demo-nautilus-gqsht Feb 2 22:17:32.255: INFO: got data: { "image": "nautilus.jpg" } Feb 2 22:17:32.255: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 2 22:17:32.255: INFO: update-demo-nautilus-gqsht is verified up and running Feb 2 22:17:32.255: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-tcxbs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:17:32.446: INFO: stderr: "" Feb 2 22:17:32.446: INFO: stdout: "" Feb 2 22:17:32.446: INFO: update-demo-nautilus-tcxbs is created but not running Feb 2 22:17:37.447: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 2 22:17:37.650: INFO: stderr: "" Feb 2 22:17:37.651: INFO: stdout: "update-demo-nautilus-gqsht update-demo-nautilus-tcxbs " Feb 2 22:17:37.651: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:17:37.876: INFO: stderr: "" Feb 2 22:17:37.876: INFO: stdout: "true" Feb 2 22:17:37.876: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 2 22:17:38.072: INFO: stderr: "" Feb 2 22:17:38.072: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Feb 2 22:17:38.072: INFO: validating pod update-demo-nautilus-gqsht Feb 2 22:17:38.121: INFO: got data: { "image": "nautilus.jpg" } Feb 2 22:17:38.121: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Feb 2 22:17:38.121: INFO: update-demo-nautilus-gqsht is verified up and running Feb 2 22:17:38.121: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-tcxbs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:17:38.315: INFO: stderr: "" Feb 2 22:17:38.315: INFO: stdout: "" Feb 2 22:17:38.315: INFO: update-demo-nautilus-tcxbs is created but not running Feb 2 22:17:43.316: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 2 22:17:43.512: INFO: stderr: "" Feb 2 22:17:43.512: INFO: stdout: "update-demo-nautilus-gqsht update-demo-nautilus-tcxbs " Feb 2 22:17:43.512: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:17:43.711: INFO: stderr: "" Feb 2 22:17:43.711: INFO: stdout: "true" Feb 2 22:17:43.711: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 2 22:17:43.906: INFO: stderr: "" Feb 2 22:17:43.906: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Feb 2 22:17:43.906: INFO: validating pod update-demo-nautilus-gqsht Feb 2 22:17:43.952: INFO: got data: { "image": "nautilus.jpg" } Feb 2 22:17:43.952: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 2 22:17:43.952: INFO: update-demo-nautilus-gqsht is verified up and running Feb 2 22:17:43.952: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-tcxbs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:17:44.146: INFO: stderr: "" Feb 2 22:17:44.146: INFO: stdout: "" Feb 2 22:17:44.146: INFO: update-demo-nautilus-tcxbs is created but not running Feb 2 22:17:49.147: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 2 22:17:49.357: INFO: stderr: "" Feb 2 22:17:49.357: INFO: stdout: "update-demo-nautilus-gqsht update-demo-nautilus-tcxbs " Feb 2 22:17:49.357: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:17:49.555: INFO: stderr: "" Feb 2 22:17:49.555: INFO: stdout: "true" Feb 2 22:17:49.556: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 2 22:17:49.754: INFO: stderr: "" Feb 2 22:17:49.754: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Feb 2 22:17:49.754: INFO: validating pod update-demo-nautilus-gqsht Feb 2 22:17:49.801: INFO: got data: { "image": "nautilus.jpg" } Feb 2 22:17:49.801: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 2 22:17:49.801: INFO: update-demo-nautilus-gqsht is verified up and running Feb 2 22:17:49.801: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-tcxbs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:17:49.998: INFO: stderr: "" Feb 2 22:17:49.998: INFO: stdout: "" Feb 2 22:17:49.998: INFO: update-demo-nautilus-tcxbs is created but not running Feb 2 22:17:54.998: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 2 22:17:55.200: INFO: stderr: "" Feb 2 22:17:55.200: INFO: stdout: "update-demo-nautilus-gqsht update-demo-nautilus-tcxbs " Feb 2 22:17:55.200: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:17:55.398: INFO: stderr: "" Feb 2 22:17:55.398: INFO: stdout: "true" Feb 2 22:17:55.398: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 2 22:17:55.597: INFO: stderr: "" Feb 2 22:17:55.598: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Feb 2 22:17:55.598: INFO: validating pod update-demo-nautilus-gqsht Feb 2 22:17:55.647: INFO: got data: { "image": "nautilus.jpg" } Feb 2 22:17:55.647: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 2 22:17:55.647: INFO: update-demo-nautilus-gqsht is verified up and running Feb 2 22:17:55.647: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-tcxbs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:17:55.845: INFO: stderr: "" Feb 2 22:17:55.845: INFO: stdout: "" Feb 2 22:17:55.845: INFO: update-demo-nautilus-tcxbs is created but not running Feb 2 22:18:00.845: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 2 22:18:01.053: INFO: stderr: "" Feb 2 22:18:01.054: INFO: stdout: "update-demo-nautilus-gqsht update-demo-nautilus-tcxbs " Feb 2 22:18:01.054: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:18:01.250: INFO: stderr: "" Feb 2 22:18:01.250: INFO: stdout: "true" Feb 2 22:18:01.250: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 2 22:18:01.444: INFO: stderr: "" Feb 2 22:18:01.444: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Feb 2 22:18:01.444: INFO: validating pod update-demo-nautilus-gqsht Feb 2 22:18:01.491: INFO: got data: { "image": "nautilus.jpg" } Feb 2 22:18:01.491: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Feb 2 22:18:01.491: INFO: update-demo-nautilus-gqsht is verified up and running Feb 2 22:18:01.491: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-tcxbs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:18:01.692: INFO: stderr: "" Feb 2 22:18:01.692: INFO: stdout: "" Feb 2 22:18:01.692: INFO: update-demo-nautilus-tcxbs is created but not running Feb 2 22:18:06.693: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 2 22:18:06.894: INFO: stderr: "" Feb 2 22:18:06.894: INFO: stdout: "update-demo-nautilus-gqsht update-demo-nautilus-tcxbs " Feb 2 22:18:06.894: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:18:07.089: INFO: stderr: "" Feb 2 22:18:07.089: INFO: stdout: "true" Feb 2 22:18:07.089: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 2 22:18:07.287: INFO: stderr: "" Feb 2 22:18:07.287: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Feb 2 22:18:07.287: INFO: validating pod update-demo-nautilus-gqsht Feb 2 22:18:07.334: INFO: got data: { "image": "nautilus.jpg" } Feb 2 22:18:07.334: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 2 22:18:07.334: INFO: update-demo-nautilus-gqsht is verified up and running Feb 2 22:18:07.334: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-tcxbs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:18:07.532: INFO: stderr: "" Feb 2 22:18:07.532: INFO: stdout: "" Feb 2 22:18:07.532: INFO: update-demo-nautilus-tcxbs is created but not running Feb 2 22:18:12.533: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 2 22:18:12.733: INFO: stderr: "" Feb 2 22:18:12.733: INFO: stdout: "update-demo-nautilus-gqsht update-demo-nautilus-tcxbs " Feb 2 22:18:12.733: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:18:12.933: INFO: stderr: "" Feb 2 22:18:12.933: INFO: stdout: "true" Feb 2 22:18:12.933: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 2 22:18:13.129: INFO: stderr: "" Feb 2 22:18:13.129: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Feb 2 22:18:13.129: INFO: validating pod update-demo-nautilus-gqsht Feb 2 22:18:13.176: INFO: got data: { "image": "nautilus.jpg" } Feb 2 22:18:13.176: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 2 22:18:13.176: INFO: update-demo-nautilus-gqsht is verified up and running Feb 2 22:18:13.176: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-tcxbs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:18:13.371: INFO: stderr: "" Feb 2 22:18:13.371: INFO: stdout: "" Feb 2 22:18:13.371: INFO: update-demo-nautilus-tcxbs is created but not running Feb 2 22:18:18.374: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 2 22:18:18.571: INFO: stderr: "" Feb 2 22:18:18.571: INFO: stdout: "update-demo-nautilus-gqsht update-demo-nautilus-tcxbs " Feb 2 22:18:18.571: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:18:18.769: INFO: stderr: "" Feb 2 22:18:18.769: INFO: stdout: "true" Feb 2 22:18:18.769: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 2 22:18:18.970: INFO: stderr: "" Feb 2 22:18:18.970: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Feb 2 22:18:18.970: INFO: validating pod update-demo-nautilus-gqsht Feb 2 22:18:19.016: INFO: got data: { "image": "nautilus.jpg" } Feb 2 22:18:19.016: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 2 22:18:19.016: INFO: update-demo-nautilus-gqsht is verified up and running Feb 2 22:18:19.016: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-tcxbs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:18:19.216: INFO: stderr: "" Feb 2 22:18:19.216: INFO: stdout: "" Feb 2 22:18:19.216: INFO: update-demo-nautilus-tcxbs is created but not running Feb 2 22:18:24.219: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 2 22:18:24.421: INFO: stderr: "" Feb 2 22:18:24.421: INFO: stdout: "update-demo-nautilus-gqsht update-demo-nautilus-tcxbs " Feb 2 22:18:24.421: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:18:24.621: INFO: stderr: "" Feb 2 22:18:24.621: INFO: stdout: "true" Feb 2 22:18:24.621: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 2 22:18:24.822: INFO: stderr: "" Feb 2 22:18:24.822: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Feb 2 22:18:24.822: INFO: validating pod update-demo-nautilus-gqsht Feb 2 22:18:24.868: INFO: got data: { "image": "nautilus.jpg" } Feb 2 22:18:24.868: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Feb 2 22:18:24.868: INFO: update-demo-nautilus-gqsht is verified up and running Feb 2 22:18:24.868: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-tcxbs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:18:25.067: INFO: stderr: "" Feb 2 22:18:25.067: INFO: stdout: "" Feb 2 22:18:25.067: INFO: update-demo-nautilus-tcxbs is created but not running Feb 2 22:18:30.072: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 2 22:18:30.274: INFO: stderr: "" Feb 2 22:18:30.274: INFO: stdout: "update-demo-nautilus-gqsht update-demo-nautilus-tcxbs " Feb 2 22:18:30.274: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:18:30.475: INFO: stderr: "" Feb 2 22:18:30.475: INFO: stdout: "true" Feb 2 22:18:30.475: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 2 22:18:30.674: INFO: stderr: "" Feb 2 22:18:30.674: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Feb 2 22:18:30.674: INFO: validating pod update-demo-nautilus-gqsht Feb 2 22:18:30.724: INFO: got data: { "image": "nautilus.jpg" } Feb 2 22:18:30.724: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 2 22:18:30.724: INFO: update-demo-nautilus-gqsht is verified up and running Feb 2 22:18:30.724: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-tcxbs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:18:30.919: INFO: stderr: "" Feb 2 22:18:30.919: INFO: stdout: "" Feb 2 22:18:30.919: INFO: update-demo-nautilus-tcxbs is created but not running Feb 2 22:18:35.920: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 2 22:18:36.120: INFO: stderr: "" Feb 2 22:18:36.120: INFO: stdout: "update-demo-nautilus-gqsht update-demo-nautilus-tcxbs " Feb 2 22:18:36.120: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:18:36.316: INFO: stderr: "" Feb 2 22:18:36.317: INFO: stdout: "true" Feb 2 22:18:36.317: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 2 22:18:36.509: INFO: stderr: "" Feb 2 22:18:36.509: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Feb 2 22:18:36.509: INFO: validating pod update-demo-nautilus-gqsht Feb 2 22:18:36.556: INFO: got data: { "image": "nautilus.jpg" } Feb 2 22:18:36.556: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 2 22:18:36.556: INFO: update-demo-nautilus-gqsht is verified up and running Feb 2 22:18:36.556: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-tcxbs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:18:36.750: INFO: stderr: "" Feb 2 22:18:36.750: INFO: stdout: "" Feb 2 22:18:36.750: INFO: update-demo-nautilus-tcxbs is created but not running Feb 2 22:18:41.751: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 2 22:18:41.952: INFO: stderr: "" Feb 2 22:18:41.952: INFO: stdout: "update-demo-nautilus-gqsht update-demo-nautilus-tcxbs " Feb 2 22:18:41.952: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:18:42.149: INFO: stderr: "" Feb 2 22:18:42.149: INFO: stdout: "true" Feb 2 22:18:42.149: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 2 22:18:42.344: INFO: stderr: "" Feb 2 22:18:42.344: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Feb 2 22:18:42.344: INFO: validating pod update-demo-nautilus-gqsht Feb 2 22:18:42.398: INFO: got data: { "image": "nautilus.jpg" } Feb 2 22:18:42.398: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 2 22:18:42.398: INFO: update-demo-nautilus-gqsht is verified up and running Feb 2 22:18:42.398: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-tcxbs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:18:42.596: INFO: stderr: "" Feb 2 22:18:42.596: INFO: stdout: "" Feb 2 22:18:42.596: INFO: update-demo-nautilus-tcxbs is created but not running Feb 2 22:18:47.597: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 2 22:18:47.800: INFO: stderr: "" Feb 2 22:18:47.800: INFO: stdout: "update-demo-nautilus-gqsht update-demo-nautilus-tcxbs " Feb 2 22:18:47.800: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:18:48.001: INFO: stderr: "" Feb 2 22:18:48.001: INFO: stdout: "true" Feb 2 22:18:48.001: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 2 22:18:48.202: INFO: stderr: "" Feb 2 22:18:48.202: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Feb 2 22:18:48.202: INFO: validating pod update-demo-nautilus-gqsht Feb 2 22:18:48.263: INFO: got data: { "image": "nautilus.jpg" } Feb 2 22:18:48.264: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Feb 2 22:18:48.264: INFO: update-demo-nautilus-gqsht is verified up and running Feb 2 22:18:48.264: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-tcxbs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:18:48.465: INFO: stderr: "" Feb 2 22:18:48.465: INFO: stdout: "" Feb 2 22:18:48.465: INFO: update-demo-nautilus-tcxbs is created but not running Feb 2 22:18:53.467: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 2 22:18:53.664: INFO: stderr: "" Feb 2 22:18:53.664: INFO: stdout: "update-demo-nautilus-gqsht update-demo-nautilus-tcxbs " Feb 2 22:18:53.664: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:18:53.858: INFO: stderr: "" Feb 2 22:18:53.858: INFO: stdout: "true" Feb 2 22:18:53.859: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 2 22:18:54.059: INFO: stderr: "" Feb 2 22:18:54.059: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Feb 2 22:18:54.059: INFO: validating pod update-demo-nautilus-gqsht Feb 2 22:18:54.106: INFO: got data: { "image": "nautilus.jpg" } Feb 2 22:18:54.106: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 2 22:18:54.106: INFO: update-demo-nautilus-gqsht is verified up and running Feb 2 22:18:54.106: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-tcxbs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:18:54.303: INFO: stderr: "" Feb 2 22:18:54.303: INFO: stdout: "" Feb 2 22:18:54.303: INFO: update-demo-nautilus-tcxbs is created but not running Feb 2 22:18:59.307: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 2 22:18:59.508: INFO: stderr: "" Feb 2 22:18:59.508: INFO: stdout: "update-demo-nautilus-gqsht update-demo-nautilus-tcxbs " Feb 2 22:18:59.508: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:18:59.707: INFO: stderr: "" Feb 2 22:18:59.707: INFO: stdout: "true" Feb 2 22:18:59.707: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 2 22:18:59.910: INFO: stderr: "" Feb 2 22:18:59.910: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Feb 2 22:18:59.910: INFO: validating pod update-demo-nautilus-gqsht Feb 2 22:18:59.957: INFO: got data: { "image": "nautilus.jpg" } Feb 2 22:18:59.957: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 2 22:18:59.957: INFO: update-demo-nautilus-gqsht is verified up and running Feb 2 22:18:59.958: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-tcxbs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:19:00.154: INFO: stderr: "" Feb 2 22:19:00.154: INFO: stdout: "" Feb 2 22:19:00.154: INFO: update-demo-nautilus-tcxbs is created but not running Feb 2 22:19:05.155: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 2 22:19:05.358: INFO: stderr: "" Feb 2 22:19:05.358: INFO: stdout: "update-demo-nautilus-gqsht update-demo-nautilus-tcxbs " Feb 2 22:19:05.358: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:19:05.559: INFO: stderr: "" Feb 2 22:19:05.559: INFO: stdout: "true" Feb 2 22:19:05.560: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 2 22:19:05.756: INFO: stderr: "" Feb 2 22:19:05.756: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Feb 2 22:19:05.756: INFO: validating pod update-demo-nautilus-gqsht Feb 2 22:19:05.806: INFO: got data: { "image": "nautilus.jpg" } Feb 2 22:19:05.806: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 2 22:19:05.806: INFO: update-demo-nautilus-gqsht is verified up and running Feb 2 22:19:05.806: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-tcxbs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:19:06.007: INFO: stderr: "" Feb 2 22:19:06.007: INFO: stdout: "" Feb 2 22:19:06.007: INFO: update-demo-nautilus-tcxbs is created but not running Feb 2 22:19:11.007: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 2 22:19:11.250: INFO: stderr: "" Feb 2 22:19:11.250: INFO: stdout: "update-demo-nautilus-gqsht update-demo-nautilus-tcxbs " Feb 2 22:19:11.250: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:19:11.444: INFO: stderr: "" Feb 2 22:19:11.444: INFO: stdout: "true" Feb 2 22:19:11.444: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 2 22:19:11.640: INFO: stderr: "" Feb 2 22:19:11.640: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Feb 2 22:19:11.640: INFO: validating pod update-demo-nautilus-gqsht Feb 2 22:19:11.688: INFO: got data: { "image": "nautilus.jpg" } Feb 2 22:19:11.688: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Feb 2 22:19:11.688: INFO: update-demo-nautilus-gqsht is verified up and running Feb 2 22:19:11.688: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-tcxbs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:19:11.885: INFO: stderr: "" Feb 2 22:19:11.885: INFO: stdout: "" Feb 2 22:19:11.885: INFO: update-demo-nautilus-tcxbs is created but not running Feb 2 22:19:16.886: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 2 22:19:17.084: INFO: stderr: "" Feb 2 22:19:17.084: INFO: stdout: "update-demo-nautilus-gqsht update-demo-nautilus-tcxbs " Feb 2 22:19:17.084: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:19:17.282: INFO: stderr: "" Feb 2 22:19:17.282: INFO: stdout: "true" Feb 2 22:19:17.282: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 2 22:19:17.483: INFO: stderr: "" Feb 2 22:19:17.483: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Feb 2 22:19:17.483: INFO: validating pod update-demo-nautilus-gqsht Feb 2 22:19:17.531: INFO: got data: { "image": "nautilus.jpg" } Feb 2 22:19:17.531: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 2 22:19:17.531: INFO: update-demo-nautilus-gqsht is verified up and running Feb 2 22:19:17.531: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-tcxbs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:19:17.730: INFO: stderr: "" Feb 2 22:19:17.730: INFO: stdout: "" Feb 2 22:19:17.730: INFO: update-demo-nautilus-tcxbs is created but not running Feb 2 22:19:22.731: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 2 22:19:22.937: INFO: stderr: "" Feb 2 22:19:22.937: INFO: stdout: "update-demo-nautilus-gqsht update-demo-nautilus-tcxbs " Feb 2 22:19:22.937: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:19:23.137: INFO: stderr: "" Feb 2 22:19:23.137: INFO: stdout: "true" Feb 2 22:19:23.137: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-gqsht -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 2 22:19:23.332: INFO: stderr: "" Feb 2 22:19:23.332: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Feb 2 22:19:23.332: INFO: validating pod update-demo-nautilus-gqsht Feb 2 22:19:23.380: INFO: got data: { "image": "nautilus.jpg" } Feb 2 22:19:23.380: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 2 22:19:23.380: INFO: update-demo-nautilus-gqsht is verified up and running Feb 2 22:19:23.380: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods update-demo-nautilus-tcxbs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 22:19:23.577: INFO: stderr: "" Feb 2 22:19:23.577: INFO: stdout: "" Feb 2 22:19:23.577: INFO: update-demo-nautilus-tcxbs is created but not running Feb 2 22:19:28.577: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.2() test/e2e/kubectl/kubectl.go:315 +0x1ec k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?) test/e2e/e2e.go:130 +0x6bb k8s.io/kubernetes/test/e2e.TestE2E(0x0?) 
STEP: using delete to clean up resources
Feb 2 22:19:28.578: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 delete --grace-period=0 --force -f -'
Feb 2 22:19:28.821: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 2 22:19:28.821: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 2 22:19:28.821: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get rc,svc -l name=update-demo --no-headers'
Feb 2 22:19:29.062: INFO: stderr: "No resources found in kubectl-1758 namespace.\n"
Feb 2 22:19:29.062: INFO: stdout: ""
Feb 2 22:19:29.062: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=kubectl-1758 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 2 22:19:29.296: INFO: stderr: ""
Feb 2 22:19:29.296: INFO: stdout: ""
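The cleanup above forces deletion with a zero grace period, the API-level equivalent of kubectl delete --grace-period=0 --force. A minimal client-go sketch of that call (again a hypothetical illustration, not the framework's implementation; cascading/propagation options are omitted):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	grace := int64(0) // --grace-period=0: do not wait for graceful termination
	err = cs.CoreV1().ReplicationControllers("kubectl-1758").Delete(
		context.TODO(), "update-demo-nautilus",
		metav1.DeleteOptions{GracePeriodSeconds: &grace})
	if err != nil {
		panic(err)
	}
}

The warning in the stderr above is the expected trade-off of this call: the server acknowledges the delete without confirming the workload has actually stopped.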
[AfterEach] [sig-cli] Kubectl client
	test/e2e/framework/framework.go:188
STEP: Collecting events from namespace "kubectl-1758".
STEP: Found 11 events.
Feb 2 22:19:29.339: INFO: At 2023-02-02 22:14:23 +0000 UTC - event for update-demo-nautilus: {replication-controller } SuccessfulCreate: Created pod: update-demo-nautilus-gqsht
Feb 2 22:19:29.339: INFO: At 2023-02-02 22:14:23 +0000 UTC - event for update-demo-nautilus: {replication-controller } SuccessfulCreate: Created pod: update-demo-nautilus-tcxbs
Feb 2 22:19:29.339: INFO: At 2023-02-02 22:14:23 +0000 UTC - event for update-demo-nautilus-gqsht: {default-scheduler } Scheduled: Successfully assigned kubectl-1758/update-demo-nautilus-gqsht to e2e-7d89e54d79-37bac-windows-node-group-q21f
Feb 2 22:19:29.339: INFO: At 2023-02-02 22:14:23 +0000 UTC - event for update-demo-nautilus-tcxbs: {default-scheduler } Scheduled: Successfully assigned kubectl-1758/update-demo-nautilus-tcxbs to e2e-7d89e54d79-37bac-windows-node-group-k0qm
Feb 2 22:19:29.339: INFO: At 2023-02-02 22:14:25 +0000 UTC - event for update-demo-nautilus-gqsht: {kubelet e2e-7d89e54d79-37bac-windows-node-group-q21f} Pulled: Container image "k8s.gcr.io/e2e-test-images/nautilus:1.5" already present on machine
Feb 2 22:19:29.339: INFO: At 2023-02-02 22:14:25 +0000 UTC - event for update-demo-nautilus-gqsht: {kubelet e2e-7d89e54d79-37bac-windows-node-group-q21f} Created: Created container update-demo
Feb 2 22:19:29.339: INFO: At 2023-02-02 22:14:25 +0000 UTC - event for update-demo-nautilus-tcxbs: {kubelet e2e-7d89e54d79-37bac-windows-node-group-k0qm} Pulled: Container image "k8s.gcr.io/e2e-test-images/nautilus:1.5" already present on machine
Feb 2 22:19:29.339: INFO: At 2023-02-02 22:14:26 +0000 UTC - event for update-demo-nautilus-tcxbs: {kubelet e2e-7d89e54d79-37bac-windows-node-group-k0qm} Created: Created container update-demo
Feb 2 22:19:29.339: INFO: At 2023-02-02 22:14:27 +0000 UTC - event for update-demo-nautilus-gqsht: {kubelet e2e-7d89e54d79-37bac-windows-node-group-q21f} Started: Started container update-demo
Feb 2 22:19:29.339: INFO: At 2023-02-02 22:16:26 +0000 UTC - event for update-demo-nautilus-tcxbs: {kubelet e2e-7d89e54d79-37bac-windows-node-group-k0qm} Failed: Error: context deadline exceeded
Feb 2 22:19:29.339: INFO: At 2023-02-02 22:19:28 +0000 UTC - event for update-demo-nautilus-gqsht: {kubelet e2e-7d89e54d79-37bac-windows-node-group-q21f} Killing: Stopping container update-demo
Feb 2 22:19:29.381: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Feb 2 22:19:29.381: INFO: update-demo-nautilus-gqsht  e2e-7d89e54d79-37bac-windows-node-group-q21f  Running  30s  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 22:14:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 22:14:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 22:14:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 22:14:23 +0000 UTC }]
Feb 2 22:19:29.381: INFO: update-demo-nautilus-tcxbs  e2e-7d89e54d79-37bac-windows-node-group-k0qm  Pending  30s  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 22:14:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 22:14:23 +0000 UTC ContainersNotReady containers with unready status: [update-demo]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 22:14:23 +0000 UTC ContainersNotReady containers with unready status: [update-demo]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 22:14:23 +0000 UTC }]
Feb 2 22:19:29.381: INFO:
Feb 2 22:19:29.521: INFO: Unable to fetch kubectl-1758/update-demo-nautilus-tcxbs/update-demo logs: the server rejected our request for an unknown reason (get pods
update-demo-nautilus-tcxbs) Feb 2 22:19:29.568: INFO: Logging node info for node e2e-7d89e54d79-37bac-master Feb 2 22:19:29.613: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-master b403d958-aef5-4e5e-9b07-9812dc3e7d8b 19304 0 2023-02-02 21:23:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:23:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-02-02 21:23:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-02-02 21:23:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kubelet Update v1 2023-02-02 21:23:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3864313856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3602169856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:23:35 +0000 
UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 22:14:59 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 22:14:59 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 22:14:59 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 22:14:59 +0000 UTC,LastTransitionTime:2023-02-02 21:23:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.2,},NodeAddress{Type:ExternalIP,Address:35.247.98.204,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-master.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-master.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e8df098cf83a91bc3c7c2a97ba5a41e9,SystemUUID:e8df098c-f83a-91bc-3c7c-2a97ba5a41e9,BootID:59df2086-4103-4b38-9939-c916841efb98,KernelVersion:5.4.129+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:131733971,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:121342787,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:52751170,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:be60ef505fc80879eeb5d8bf3ad8bb1146b395afc2394584645e99431806c26c gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.12.0],SizeBytes:32705362,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:d863f7fd0da4392b9753dc6c9195a658e80d70e0be8c9adb410d77cf20b75c76 registry.k8s.io/kas-network-proxy/proxy-server:v0.0.35],SizeBytes:21985251,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c 
registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 22:19:29.613: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-master Feb 2 22:19:29.658: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-master Feb 2 22:19:29.733: INFO: etcd-server-events-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 22:19:29.733: INFO: Container etcd-container ready: true, restart count 0 Feb 2 22:19:29.733: INFO: kube-addon-manager-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:58 +0000 UTC (0+1 container statuses recorded) Feb 2 22:19:29.733: INFO: Container kube-addon-manager ready: true, restart count 0 Feb 2 22:19:29.733: INFO: kube-controller-manager-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 22:19:29.733: INFO: Container kube-controller-manager ready: true, restart count 2 Feb 2 22:19:29.733: INFO: kube-apiserver-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 22:19:29.733: INFO: Container kube-apiserver ready: true, restart count 0 Feb 2 22:19:29.733: INFO: kube-scheduler-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 22:19:29.733: INFO: Container kube-scheduler ready: true, restart count 0 Feb 2 22:19:29.733: INFO: etcd-server-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 22:19:29.733: INFO: Container etcd-container ready: true, restart count 0 Feb 2 22:19:29.733: INFO: l7-lb-controller-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:58 +0000 UTC (0+1 container statuses recorded) Feb 2 22:19:29.733: INFO: Container l7-lb-controller ready: true, restart count 3 Feb 2 22:19:29.733: INFO: metadata-proxy-v0.1-fmxnz started at 2023-02-02 21:23:46 +0000 UTC (0+2 container statuses recorded) Feb 2 22:19:29.733: INFO: Container metadata-proxy ready: true, restart count 0 Feb 2 22:19:29.733: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Feb 2 22:19:29.733: INFO: konnectivity-server-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 22:19:29.733: INFO: Container konnectivity-server-container ready: true, restart count 0 Feb 2 22:19:29.941: INFO: Latency metrics for node e2e-7d89e54d79-37bac-master Feb 2 22:19:29.942: INFO: Logging node info for node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 22:19:29.985: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-minion-group-1vp1 d81fe224-05dd-48a7-9693-e2f2826a1b97 19976 0 2023-02-02 21:23:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-minion-group-1vp1 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-4 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-02-02 21:23:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-02-02 21:23:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.5.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:23:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-02-02 21:23:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-02-02 21:24:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.5.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-minion-group-1vp1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.5.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15735660544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15473516544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-02-02 22:18:49 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-02-02 22:18:49 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-02-02 22:18:49 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-02-02 22:18:49 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-02-02 22:18:49 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-02-02 22:18:49 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-02-02 22:18:49 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:23:38 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 22:17:56 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 22:17:56 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 22:17:56 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 22:17:56 +0000 UTC,LastTransitionTime:2023-02-02 21:23:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.7,},NodeAddress{Type:ExternalIP,Address:35.197.102.154,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-minion-group-1vp1.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-minion-group-1vp1.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ade76a2ad94b4b90c3b7ba811704d98c,SystemUUID:ade76a2a-d94b-4b90-c3b7-ba811704d98c,BootID:29452487-f38a-42cd-8605-aecb73730dd9,KernelVersion:5.4.129+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:112212023,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4 k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0],SizeBytes:18952261,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[k8s.gcr.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 k8s.gcr.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:8970dca5c4c9df1d566c3c3c91ef2e743e410a8623d42062eb48e7245f1eef93 registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.35],SizeBytes:8488019,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 k8s.gcr.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf 
k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 22:19:29.985: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 22:19:30.029: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 22:19:30.091: INFO: metadata-proxy-v0.1-kmxp5 started at 2023-02-02 21:23:39 +0000 UTC (0+2 container statuses recorded) Feb 2 22:19:30.091: INFO: Container metadata-proxy ready: true, restart count 0 Feb 2 22:19:30.091: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Feb 2 22:19:30.091: INFO: kube-proxy-e2e-7d89e54d79-37bac-minion-group-1vp1 started at 2023-02-02 21:23:38 +0000 UTC (0+1 container statuses recorded) Feb 2 22:19:30.091: INFO: Container kube-proxy ready: true, restart count 0 Feb 2 22:19:30.091: INFO: coredns-8c79ffd8b-rd5tr started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 22:19:30.091: INFO: Container coredns ready: true, restart count 0 Feb 2 22:19:30.091: INFO: l7-default-backend-8667cd4ffc-pgmnb started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 22:19:30.091: INFO: Container default-http-backend ready: true, restart count 0 Feb 2 22:19:30.091: INFO: volume-snapshot-controller-0 started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 22:19:30.091: INFO: Container volume-snapshot-controller ready: true, restart count 0 Feb 2 22:19:30.091: INFO: konnectivity-agent-mn5mq started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 22:19:30.091: INFO: Container konnectivity-agent ready: true, restart count 0 Feb 2 22:19:30.091: INFO: kube-dns-autoscaler-596f6cf79f-v76jk started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 22:19:30.091: INFO: Container autoscaler ready: true, restart count 0 Feb 2 22:19:30.260: INFO: Latency metrics for node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 22:19:30.260: INFO: Logging node info for node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 22:19:30.303: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-minion-group-fhnf 9c0dcb7a-8a6b-4535-afe8-b62bf19173f7 19975 0 2023-02-02 21:23:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-minion-group-fhnf kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-4 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:23:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.4.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:23:39 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-02-02 21:23:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-02-02 21:23:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-02-02 21:24:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.4.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-minion-group-fhnf,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.4.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15735660544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15473516544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-02-02 22:18:49 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning 
properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-02-02 22:18:49 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-02-02 22:18:49 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-02-02 22:18:49 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-02-02 22:18:49 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-02-02 22:18:49 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-02-02 22:18:49 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:23:39 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 22:17:16 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 22:17:16 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 22:17:16 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 22:17:16 +0000 UTC,LastTransitionTime:2023-02-02 21:23:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.6,},NodeAddress{Type:ExternalIP,Address:34.127.30.111,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-minion-group-fhnf.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-minion-group-fhnf.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8abcabc408b3bd715147992f3d5a5854,SystemUUID:8abcabc4-08b3-bd71-5147-992f3d5a5854,BootID:8d0324c2-172b-4c43-81ee-83b6878e11ee,KernelVersion:5.4.129+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:112212023,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[k8s.gcr.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 k8s.gcr.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:8970dca5c4c9df1d566c3c3c91ef2e743e410a8623d42062eb48e7245f1eef93 registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.35],SizeBytes:8488019,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 22:19:30.304: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 22:19:30.347: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 22:19:30.409: INFO: kube-proxy-e2e-7d89e54d79-37bac-minion-group-fhnf started at 2023-02-02 21:23:39 +0000 UTC (0+1 container statuses recorded) Feb 2 22:19:30.409: INFO: Container kube-proxy 
ready: true, restart count 0 Feb 2 22:19:30.409: INFO: metrics-server-v0.5.2-6d6794c8cd-9vklc started at 2023-02-02 21:24:01 +0000 UTC (0+2 container statuses recorded) Feb 2 22:19:30.409: INFO: Container metrics-server ready: true, restart count 0 Feb 2 22:19:30.409: INFO: Container metrics-server-nanny ready: true, restart count 0 Feb 2 22:19:30.409: INFO: coredns-8c79ffd8b-4v5p9 started at 2023-02-02 21:23:54 +0000 UTC (0+1 container statuses recorded) Feb 2 22:19:30.409: INFO: Container coredns ready: true, restart count 0 Feb 2 22:19:30.409: INFO: metadata-proxy-v0.1-xl4fd started at 2023-02-02 21:23:40 +0000 UTC (0+2 container statuses recorded) Feb 2 22:19:30.410: INFO: Container metadata-proxy ready: true, restart count 0 Feb 2 22:19:30.410: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Feb 2 22:19:30.410: INFO: konnectivity-agent-k667p started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 22:19:30.410: INFO: Container konnectivity-agent ready: true, restart count 0 Feb 2 22:19:30.639: INFO: Latency metrics for node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 22:19:30.639: INFO: Logging node info for node e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 22:19:30.735: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-windows-node-group-jllf 1f67dcc8-9253-4e93-8b90-78810a8df879 19770 0 2023-02-02 21:29:09 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-windows-node-group-jllf kubernetes.io/os:windows node.kubernetes.io/instance-type:n1-standard-4 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:29:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:29:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet.exe Update v1 2023-02-02 21:29:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet.exe Update v1 2023-02-02 21:30:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-windows-node-group-jllf,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{107252527104 0} {<nil>} 104738796Ki BinarySI},memory: {{16102309888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{96527274234 0} {<nil>} 96527274234 DecimalSI},memory: {{15840165888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:29:09 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 22:17:21 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 22:17:21 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 22:17:21 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 22:17:21 +0000 UTC,LastTransitionTime:2023-02-02 21:29:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.4,},NodeAddress{Type:ExternalIP,Address:34.83.75.252,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-windows-node-group-jllf.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-windows-node-group-jllf,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e2e-7d89e54d79-37bac-windows-node-group-jllf,SystemUUID:23C88569-8B16-0615-6BFF-BB819EADA98A,BootID:9,KernelVersion:10.0.17763.2183,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:204397145,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c k8s.gcr.io/etcd:3.5.6-0],SizeBytes:203784192,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:203202672,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:104484632,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:102745583,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 22:19:30.736: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 22:19:30.791: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 22:19:30.869: INFO: ss2-2 started at 2023-02-02 22:10:21 +0000 UTC (0+1 container statuses recorded) Feb 2 22:19:30.869: INFO: Container webserver ready: true, restart count 0 Feb 2 22:19:31.079: INFO: Latency metrics for node e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 22:19:31.079: INFO: Logging node info for node e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 22:19:31.125: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-windows-node-group-k0qm 91cb59e3-df60-4007-bdc1-bb197e591e43 19970 0 2023-02-02 21:29:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-windows-node-group-k0qm kubernetes.io/os:windows node.kubernetes.io/instance-type:n1-standard-4 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet.exe Update v1 2023-02-02 21:29:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-02-02 21:29:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:29:24 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet.exe Update v1 2023-02-02 21:30:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-windows-node-group-k0qm,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{107252527104 0} {<nil>} 104738796Ki BinarySI},memory: {{16102309888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{96527274234 0} {<nil>} 96527274234 DecimalSI},memory: {{15840165888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:29:24 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 22:18:47 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 22:18:47 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 22:18:47 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 22:18:47 +0000 UTC,LastTransitionTime:2023-02-02 21:29:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.1.208,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-windows-node-group-k0qm.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-windows-node-group-k0qm,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e2e-7d89e54d79-37bac-windows-node-group-k0qm,SystemUUID:FC53E984-3141-4AB0-99D2-83726BB3072F,BootID:9,KernelVersion:10.0.17763.2183,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e 
k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:204397145,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:203202672,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:179603451,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:104484632,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/windows-nanoserver@sha256:fb9b25770487567c02bf90dd3edea7917323556d1b7ba81ec042ffd5f9effeae gcr.io/authenticated-image-pulling/windows-nanoserver:v1],SizeBytes:101148102,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 22:19:31.125: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 22:19:31.170: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 22:19:31.219: INFO: ss2-1 started at 2023-02-02 22:09:17 +0000 UTC (0+1 container statuses recorded) Feb 2 22:19:31.219: INFO: Container webserver ready: false, restart count 0 Feb 2 22:19:31.219: INFO: update-demo-nautilus-tcxbs started at 2023-02-02 22:14:23 +0000 UTC (0+1 container statuses recorded) Feb 2 22:19:31.219: INFO: Container update-demo ready: false, restart count 1 Feb 2 22:21:31.317: INFO: Logging node info for node e2e-7d89e54d79-37bac-windows-node-group-q21f Feb 2 22:21:31.362: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-windows-node-group-q21f eef3ae47-aa0d-4af8-87e8-4c4de04eace2 19722 0 2023-02-02 21:29:14 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-windows-node-group-q21f kubernetes.io/os:windows node.kubernetes.io/instance-type:n1-standard-4 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:29:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 
2023-02-02 21:29:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet.exe Update v1 2023-02-02 21:29:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet.exe Update v1 2023-02-02 21:30:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-windows-node-group-q21f,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{107252527104 0} {<nil>} 104738796Ki BinarySI},memory: {{16102309888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{96527274234 0} {<nil>} 96527274234 DecimalSI},memory: {{15840165888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:29:14 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 22:17:00 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 22:17:00 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 22:17:00 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 22:17:00 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.230.207,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-windows-node-group-q21f.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-windows-node-group-q21f,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e2e-7d89e54d79-37bac-windows-node-group-q21f,SystemUUID:B1BBE679-4138-5169-4472-E3B13289F193,BootID:9,KernelVersion:10.0.17763.2183,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:204397145,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c k8s.gcr.io/etcd:3.5.6-0],SizeBytes:203784192,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:203202672,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:179603451,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:104484632,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:102745583,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 22:21:31.362: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-windows-node-group-q21f Feb 2 22:21:31.406: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-windows-node-group-q21f Feb 2 22:21:31.452: INFO: ss2-0 started at 2023-02-02 22:09:09 +0000 UTC (0+1 container statuses recorded) Feb 2 22:21:31.452: INFO: Container webserver ready: true, restart count 0 Feb 2 22:21:31.662: INFO: Latency metrics for node e2e-7d89e54d79-37bac-windows-node-group-q21f Feb 2 22:21:31.662: INFO: Waiting up to 3m0s for all (but 3) nodes to be ready STEP: Destroying namespace "kubectl-1758" for this suite.
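The kubelet pod listings above carry the useful signal in this dump: ss2-0 and ss2-2 report Container webserver ready: true, while ss2-1 on e2e-7d89e54d79-37bac-windows-node-group-k0qm reports ready: false with restart count 0. A minimal client-go sketch that reproduces this per-container readiness summary for the test namespace (illustrative only, not the e2e framework's code; the kubeconfig path is the one the log's kubeConfig line reports):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path taken from the log's ">>> kubeConfig:" line.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// Print the same per-container summary that the "Logging pods the
    	// kubelet thinks is on node ..." lines produce, per pod and node.
    	pods, err := cs.CoreV1().Pods("statefulset-4695").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		for _, c := range p.Status.ContainerStatuses {
    			fmt.Printf("%s on %s: Container %s ready: %v, restart count %d\n",
    				p.Name, p.Spec.NodeName, c.Name, c.Ready, c.RestartCount)
    		}
    	}
    }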
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sServices\sshould\sserve\sa\sbasic\sendpoint\sfrom\spods\s\s\[Conformance\]$'
test/e2e/framework/framework.go:652 Feb 2 22:15:49.092: Unexpected error: <*errors.errorString | 0xc00022a1e0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred test/e2e/framework/pods.go:107 from junit_02.xml
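The failing call is the framework's PodClient.CreateSync (test/e2e/framework/pods.go:107 in the stack trace below): it creates the pod and then polls until the pod is Running with Ready=true, and "timed out waiting for the condition" is the generic error the apimachinery wait helpers return when the condition never holds. A rough sketch of that wait against client-go (an approximation under stated assumptions, not the framework's actual implementation; the 2-second interval matches the cadence of the status lines below):

    // Package podwait: hypothetical helper approximating what CreateSync blocks on.
    package podwait

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodRunningReady polls the pod every 2s until it is Running with
    // Ready=true. On timeout, the wait package returns its generic
    // "timed out waiting for the condition" error, the string seen in this failure.
    func waitPodRunningReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, err
    		}
    		if pod.Status.Phase != corev1.PodRunning {
    			return false, nil // e.g. pod2 stayed Pending for the whole window
    		}
    		for _, cond := range pod.Status.Conditions {
    			if cond.Type == corev1.PodReady {
    				return cond.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    }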
[BeforeEach] [sig-network] Services test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Feb 2 22:10:03.646: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services test/e2e/network/service.go:758 [It] should serve a basic endpoint from pods [Conformance] test/e2e/framework/framework.go:652 STEP: creating service endpoint-test2 in namespace services-6696 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6696 to expose endpoints map[] Feb 2 22:10:04.141: INFO: successfully validated that service endpoint-test2 in namespace services-6696 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-6696 Feb 2 22:10:04.242: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:10:06.285: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:10:08.294: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:10:10.285: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6696 to expose endpoints map[pod1:[80]] Feb 2 22:10:10.508: INFO: successfully validated that service endpoint-test2 in namespace services-6696 exposes endpoints map[pod1:[80]] STEP: Checking if the Service forwards traffic to pod1 Feb 2 22:10:10.508: INFO: Creating new exec pod Feb 2 22:10:47.645: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=services-6696 exec execpod4ggdl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' Feb 2 22:10:48.318: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nendpoint-test2.services-6696.svc.cluster.local [10.0.135.171] 80 (http) open\r\n" Feb 2 22:10:48.318: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Feb 2 22:10:48.318: INFO: Running '/home/prow/go/src/sigs.k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://35.247.98.204 --kubeconfig=/workspace/.kube/config --namespace=services-6696 exec execpod4ggdl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.0.135.171 80' Feb 2 22:10:48.915: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.0.135.171 80\nendpoint-test2.services-6696.svc.cluster.local [10.0.135.171] 80 (http) open\r\n" Feb 2 22:10:48.915: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" STEP: Creating pod pod2 in namespace services-6696 Feb 2 22:10:49.007: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:10:51.060: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:10:53.058: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:10:55.071: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:10:57.060: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) 
Feb 2 22:10:59.057: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:01.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:03.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:05.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:07.086: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:09.054: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:11.057: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:13.049: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:15.053: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:17.049: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:19.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:21.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:23.055: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:25.053: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:27.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:29.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:31.053: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:33.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:35.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:37.049: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:39.049: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:41.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:43.117: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:45.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:47.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:49.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:51.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:53.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:55.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:57.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:11:59.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:01.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:03.050: INFO: The 
status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:05.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:07.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:09.052: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:11.052: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:13.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:15.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:17.054: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:19.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:21.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:23.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:25.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:27.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:29.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:31.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:33.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:35.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:37.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:39.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:41.052: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:43.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:45.052: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:47.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:49.068: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:51.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:53.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:55.052: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:57.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:12:59.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:01.053: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:03.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:05.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:07.053: INFO: The status of Pod pod2 is Pending, 
waiting for it to be Running (with Ready = true) Feb 2 22:13:09.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:11.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:13.053: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:15.055: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:17.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:19.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:21.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:23.062: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:25.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:27.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:29.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:31.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:33.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:35.052: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:37.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:39.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:41.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:43.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:45.052: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:47.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:49.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:51.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:53.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:55.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:57.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:13:59.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:01.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:03.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:05.049: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:07.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:09.052: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:11.094: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with 
Ready = true) Feb 2 22:14:13.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:15.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:17.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:19.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:21.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:23.124: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:25.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:27.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:29.049: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:31.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:33.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:35.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:37.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:39.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:41.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:43.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:45.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:47.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:49.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:51.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:53.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:55.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:57.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:14:59.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:15:01.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:15:03.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:15:05.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:15:07.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:15:09.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:15:11.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:15:13.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:15:15.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:15:17.059: 
INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:15:19.049: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:15:21.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:15:23.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:15:25.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:15:27.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:15:29.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:15:31.054: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:15:33.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:15:35.051: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:15:37.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:15:39.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:15:41.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:15:43.057: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:15:45.052: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:15:47.052: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:15:49.050: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:15:49.092: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:15:49.092: FAIL: Unexpected error: <*errors.errorString | 0xc00022a1e0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*PodClient).CreateSync(0xc0027cc930, 0x0?) test/e2e/framework/pods.go:107 +0x94 k8s.io/kubernetes/test/e2e/network.createPodOrFail(0x0?, {0xc00323c730, 0xd}, {0x71727c5, 0x4}, 0xc003efec90, {0xc003a2a480, 0x1, 0x1}, {0xc003429da0, ...}) test/e2e/network/service.go:3876 +0x255 k8s.io/kubernetes/test/e2e/network.glob..func25.4() test/e2e/network/service.go:826 +0x57e k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?) test/e2e/e2e.go:130 +0x6bb k8s.io/kubernetes/test/e2e.TestE2E(0x24e52d9?) test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0006fa340, 0x741f9a8) /usr/local/go/src/testing/testing.go:1446 +0x10b created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1493 +0x35f [AfterEach] [sig-network] Services test/e2e/framework/framework.go:188 STEP: Collecting events from namespace "services-6696". STEP: Found 14 events. 
Feb 2 22:15:49.307: INFO: At 2023-02-02 22:10:04 +0000 UTC - event for pod1: {default-scheduler } Scheduled: Successfully assigned services-6696/pod1 to e2e-7d89e54d79-37bac-windows-node-group-q21f Feb 2 22:15:49.307: INFO: At 2023-02-02 22:10:06 +0000 UTC - event for pod1: {kubelet e2e-7d89e54d79-37bac-windows-node-group-q21f} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Feb 2 22:15:49.307: INFO: At 2023-02-02 22:10:06 +0000 UTC - event for pod1: {kubelet e2e-7d89e54d79-37bac-windows-node-group-q21f} Created: Created container agnhost-container Feb 2 22:15:49.307: INFO: At 2023-02-02 22:10:07 +0000 UTC - event for pod1: {kubelet e2e-7d89e54d79-37bac-windows-node-group-q21f} Started: Started container agnhost-container Feb 2 22:15:49.307: INFO: At 2023-02-02 22:10:10 +0000 UTC - event for execpod4ggdl: {default-scheduler } Scheduled: Successfully assigned services-6696/execpod4ggdl to e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 22:15:49.307: INFO: At 2023-02-02 22:10:32 +0000 UTC - event for execpod4ggdl: {kubelet e2e-7d89e54d79-37bac-windows-node-group-k0qm} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Feb 2 22:15:49.307: INFO: At 2023-02-02 22:10:36 +0000 UTC - event for execpod4ggdl: {kubelet e2e-7d89e54d79-37bac-windows-node-group-k0qm} Created: Created container agnhost-container Feb 2 22:15:49.307: INFO: At 2023-02-02 22:10:45 +0000 UTC - event for execpod4ggdl: {kubelet e2e-7d89e54d79-37bac-windows-node-group-k0qm} Started: Started container agnhost-container Feb 2 22:15:49.307: INFO: At 2023-02-02 22:10:48 +0000 UTC - event for pod2: {default-scheduler } Scheduled: Successfully assigned services-6696/pod2 to e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 22:15:49.307: INFO: At 2023-02-02 22:10:56 +0000 UTC - event for pod2: {kubelet e2e-7d89e54d79-37bac-windows-node-group-k0qm} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Feb 2 22:15:49.307: INFO: At 2023-02-02 22:10:57 +0000 UTC - event for pod2: {kubelet e2e-7d89e54d79-37bac-windows-node-group-k0qm} Created: Created container agnhost-container Feb 2 22:15:49.307: INFO: At 2023-02-02 22:12:57 +0000 UTC - event for pod2: {kubelet e2e-7d89e54d79-37bac-windows-node-group-k0qm} Failed: Error: context deadline exceeded Feb 2 22:15:49.307: INFO: At 2023-02-02 22:15:48 +0000 UTC - event for pod1: {kubelet e2e-7d89e54d79-37bac-windows-node-group-q21f} Killing: Stopping container agnhost-container Feb 2 22:15:49.307: INFO: At 2023-02-02 22:15:49 +0000 UTC - event for endpoint-test2: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint services-6696/endpoint-test2: Operation cannot be fulfilled on endpoints "endpoint-test2": the object has been modified; please apply your changes to the latest version and try again Feb 2 22:15:49.350: INFO: POD NODE PHASE GRACE CONDITIONS Feb 2 22:15:49.350: INFO: execpod4ggdl e2e-7d89e54d79-37bac-windows-node-group-k0qm Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 22:10:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 22:10:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 22:10:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 22:10:10 +0000 UTC }] Feb 2 22:15:49.350: INFO: pod2 e2e-7d89e54d79-37bac-windows-node-group-k0qm Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 22:10:48 +0000 UTC } {Ready False 0001-01-01 
00:00:00 +0000 UTC 2023-02-02 22:10:48 +0000 UTC ContainersNotReady containers with unready status: [agnhost-container]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 22:10:48 +0000 UTC ContainersNotReady containers with unready status: [agnhost-container]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 22:10:48 +0000 UTC }] Feb 2 22:15:49.350: INFO: Feb 2 22:15:49.479: INFO: Unable to fetch services-6696/pod2/agnhost-container logs: the server rejected our request for an unknown reason (get pods pod2) Feb 2 22:15:49.533: INFO: Logging node info for node e2e-7d89e54d79-37bac-master Feb 2 22:15:49.576: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-master b403d958-aef5-4e5e-9b07-9812dc3e7d8b 19304 0 2023-02-02 21:23:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:23:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-02-02 21:23:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-02-02 21:23:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kubelet Update v1 2023-02-02 21:23:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3864313856 0} {<nil>} 
BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3602169856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:23:35 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 22:14:59 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 22:14:59 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 22:14:59 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 22:14:59 +0000 UTC,LastTransitionTime:2023-02-02 21:23:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.2,},NodeAddress{Type:ExternalIP,Address:35.247.98.204,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-master.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-master.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e8df098cf83a91bc3c7c2a97ba5a41e9,SystemUUID:e8df098c-f83a-91bc-3c7c-2a97ba5a41e9,BootID:59df2086-4103-4b38-9939-c916841efb98,KernelVersion:5.4.129+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:131733971,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:121342787,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:52751170,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:be60ef505fc80879eeb5d8bf3ad8bb1146b395afc2394584645e99431806c26c gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.12.0],SizeBytes:32705362,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 
registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:d863f7fd0da4392b9753dc6c9195a658e80d70e0be8c9adb410d77cf20b75c76 registry.k8s.io/kas-network-proxy/proxy-server:v0.0.35],SizeBytes:21985251,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 22:15:49.577: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-master Feb 2 22:15:49.622: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-master Feb 2 22:15:49.710: INFO: metadata-proxy-v0.1-fmxnz started at 2023-02-02 21:23:46 +0000 UTC (0+2 container statuses recorded) Feb 2 22:15:49.710: INFO: Container metadata-proxy ready: true, restart count 0 Feb 2 22:15:49.710: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Feb 2 22:15:49.710: INFO: konnectivity-server-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 22:15:49.710: INFO: Container konnectivity-server-container ready: true, restart count 0 Feb 2 22:15:49.710: INFO: kube-apiserver-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 22:15:49.710: INFO: Container kube-apiserver ready: true, restart count 0 Feb 2 22:15:49.710: INFO: kube-scheduler-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 22:15:49.710: INFO: Container kube-scheduler ready: true, restart count 0 Feb 2 22:15:49.710: INFO: etcd-server-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 22:15:49.710: INFO: Container etcd-container ready: true, restart count 0 Feb 2 22:15:49.710: INFO: l7-lb-controller-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:58 +0000 UTC (0+1 container statuses recorded) Feb 2 22:15:49.710: INFO: Container l7-lb-controller ready: true, restart count 3 Feb 2 22:15:49.710: INFO: kube-controller-manager-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 22:15:49.710: INFO: Container kube-controller-manager ready: true, restart count 2 Feb 2 22:15:49.710: INFO: etcd-server-events-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 22:15:49.710: INFO: Container etcd-container ready: true, restart count 0 Feb 2 22:15:49.710: INFO: kube-addon-manager-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:58 +0000 UTC (0+1 container statuses recorded) Feb 2 22:15:49.710: INFO: Container kube-addon-manager ready: true, restart count 0 Feb 2 22:15:49.936: INFO: Latency metrics for node e2e-7d89e54d79-37bac-master Feb 2 22:15:49.936: INFO: Logging node info for node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 22:15:49.980: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-minion-group-1vp1 d81fe224-05dd-48a7-9693-e2f2826a1b97 18987 0 2023-02-02 21:23:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true 
failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-minion-group-1vp1 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-4 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-02-02 21:23:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-02-02 21:23:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.5.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:23:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-02-02 21:23:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-02-02 21:24:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.5.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-minion-group-1vp1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.5.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>} 
BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15735660544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15473516544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-02-02 22:13:48 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-02-02 22:13:48 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-02-02 22:13:48 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-02-02 22:13:48 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-02-02 22:13:48 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-02-02 22:13:48 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-02-02 22:13:48 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:23:38 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 22:12:48 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 22:12:48 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 22:12:48 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 22:12:48 +0000 UTC,LastTransitionTime:2023-02-02 21:23:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.7,},NodeAddress{Type:ExternalIP,Address:35.197.102.154,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-minion-group-1vp1.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-minion-group-1vp1.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ade76a2ad94b4b90c3b7ba811704d98c,SystemUUID:ade76a2a-d94b-4b90-c3b7-ba811704d98c,BootID:29452487-f38a-42cd-8605-aecb73730dd9,KernelVersion:5.4.129+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:112212023,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4 k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0],SizeBytes:18952261,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[k8s.gcr.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 k8s.gcr.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:8970dca5c4c9df1d566c3c3c91ef2e743e410a8623d42062eb48e7245f1eef93 registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.35],SizeBytes:8488019,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 k8s.gcr.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf 
k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 22:15:49.981: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 22:15:50.028: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 22:15:50.101: INFO: metadata-proxy-v0.1-kmxp5 started at 2023-02-02 21:23:39 +0000 UTC (0+2 container statuses recorded) Feb 2 22:15:50.101: INFO: Container metadata-proxy ready: true, restart count 0 Feb 2 22:15:50.101: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Feb 2 22:15:50.101: INFO: kube-proxy-e2e-7d89e54d79-37bac-minion-group-1vp1 started at 2023-02-02 21:23:38 +0000 UTC (0+1 container statuses recorded) Feb 2 22:15:50.101: INFO: Container kube-proxy ready: true, restart count 0 Feb 2 22:15:50.101: INFO: coredns-8c79ffd8b-rd5tr started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 22:15:50.101: INFO: Container coredns ready: true, restart count 0 Feb 2 22:15:50.101: INFO: l7-default-backend-8667cd4ffc-pgmnb started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 22:15:50.101: INFO: Container default-http-backend ready: true, restart count 0 Feb 2 22:15:50.101: INFO: volume-snapshot-controller-0 started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 22:15:50.101: INFO: Container volume-snapshot-controller ready: true, restart count 0 Feb 2 22:15:50.101: INFO: konnectivity-agent-mn5mq started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 22:15:50.101: INFO: Container konnectivity-agent ready: true, restart count 0 Feb 2 22:15:50.101: INFO: kube-dns-autoscaler-596f6cf79f-v76jk started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 22:15:50.101: INFO: Container autoscaler ready: true, restart count 0 Feb 2 22:15:50.298: INFO: Latency metrics for node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 22:15:50.298: INFO: Logging node info for node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 22:15:50.342: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-minion-group-fhnf 9c0dcb7a-8a6b-4535-afe8-b62bf19173f7 18993 0 2023-02-02 21:23:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-minion-group-fhnf kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-4 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:23:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.4.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:23:39 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-02-02 21:23:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-02-02 21:23:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-02-02 21:24:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.4.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-minion-group-fhnf,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.4.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15735660544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15473516544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-02-02 22:13:49 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no 
deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-02-02 22:13:49 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-02-02 22:13:49 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-02-02 22:13:49 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-02-02 22:13:49 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-02-02 22:13:49 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-02-02 22:13:49 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:23:39 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 22:12:10 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 22:12:10 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 22:12:10 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 22:12:10 +0000 UTC,LastTransitionTime:2023-02-02 21:23:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.6,},NodeAddress{Type:ExternalIP,Address:34.127.30.111,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-minion-group-fhnf.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-minion-group-fhnf.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8abcabc408b3bd715147992f3d5a5854,SystemUUID:8abcabc4-08b3-bd71-5147-992f3d5a5854,BootID:8d0324c2-172b-4c43-81ee-83b6878e11ee,KernelVersion:5.4.129+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:112212023,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[k8s.gcr.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 k8s.gcr.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:8970dca5c4c9df1d566c3c3c91ef2e743e410a8623d42062eb48e7245f1eef93 registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.35],SizeBytes:8488019,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 22:15:50.342: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 22:15:50.386: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 22:15:50.454: INFO: metadata-proxy-v0.1-xl4fd started at 2023-02-02 21:23:40 +0000 UTC (0+2 container statuses recorded) Feb 2 22:15:50.454: INFO: Container metadata-proxy ready: true, restart 
count 0 Feb 2 22:15:50.454: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Feb 2 22:15:50.454: INFO: konnectivity-agent-k667p started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 22:15:50.454: INFO: Container konnectivity-agent ready: true, restart count 0 Feb 2 22:15:50.454: INFO: kube-proxy-e2e-7d89e54d79-37bac-minion-group-fhnf started at 2023-02-02 21:23:39 +0000 UTC (0+1 container statuses recorded) Feb 2 22:15:50.454: INFO: Container kube-proxy ready: true, restart count 0 Feb 2 22:15:50.454: INFO: metrics-server-v0.5.2-6d6794c8cd-9vklc started at 2023-02-02 21:24:01 +0000 UTC (0+2 container statuses recorded) Feb 2 22:15:50.454: INFO: Container metrics-server ready: true, restart count 0 Feb 2 22:15:50.454: INFO: Container metrics-server-nanny ready: true, restart count 0 Feb 2 22:15:50.454: INFO: coredns-8c79ffd8b-4v5p9 started at 2023-02-02 21:23:54 +0000 UTC (0+1 container statuses recorded) Feb 2 22:15:50.454: INFO: Container coredns ready: true, restart count 0 Feb 2 22:15:50.662: INFO: Latency metrics for node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 22:15:50.662: INFO: Logging node info for node e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 22:15:50.706: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-windows-node-group-jllf 1f67dcc8-9253-4e93-8b90-78810a8df879 18606 0 2023-02-02 21:29:09 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-windows-node-group-jllf kubernetes.io/os:windows node.kubernetes.io/instance-type:n1-standard-4 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:29:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:29:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet.exe Update v1 2023-02-02 21:29:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet.exe Update v1 2023-02-02 21:30:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-windows-node-group-jllf,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{107252527104 0} {<nil>} 104738796Ki BinarySI},memory: {{16102309888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{96527274234 0} {<nil>} 96527274234 DecimalSI},memory: {{15840165888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:29:09 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 22:12:15 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 22:12:15 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 22:12:15 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 22:12:15 +0000 UTC,LastTransitionTime:2023-02-02 21:29:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.4,},NodeAddress{Type:ExternalIP,Address:34.83.75.252,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-windows-node-group-jllf.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-windows-node-group-jllf,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e2e-7d89e54d79-37bac-windows-node-group-jllf,SystemUUID:23C88569-8B16-0615-6BFF-BB819EADA98A,BootID:9,KernelVersion:10.0.17763.2183,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:204397145,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c k8s.gcr.io/etcd:3.5.6-0],SizeBytes:203784192,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:203202672,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:104484632,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:102745583,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 22:15:50.706: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 22:15:50.753: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 22:15:50.830: INFO: ss2-2 started at 2023-02-02 22:10:21 +0000 UTC (0+1 container statuses recorded) Feb 2 22:15:50.830: INFO: Container webserver ready: true, restart count 0 Feb 2 22:15:51.043: INFO: Latency metrics for node e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 22:15:51.043: INFO: Logging node info for node e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 22:15:51.096: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-windows-node-group-k0qm 91cb59e3-df60-4007-bdc1-bb197e591e43 18930 0 2023-02-02 21:29:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-windows-node-group-k0qm kubernetes.io/os:windows node.kubernetes.io/instance-type:n1-standard-4 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet.exe Update v1 2023-02-02 21:29:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-02-02 21:29:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:29:24 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet.exe Update v1 2023-02-02 21:30:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-windows-node-group-k0qm,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{107252527104 0} {<nil>} 104738796Ki BinarySI},memory: {{16102309888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{96527274234 0} {<nil>} 96527274234 DecimalSI},memory: {{15840165888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:29:24 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 22:13:41 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 22:13:41 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 22:13:41 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 22:13:41 +0000 UTC,LastTransitionTime:2023-02-02 21:29:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.1.208,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-windows-node-group-k0qm.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-windows-node-group-k0qm,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e2e-7d89e54d79-37bac-windows-node-group-k0qm,SystemUUID:FC53E984-3141-4AB0-99D2-83726BB3072F,BootID:9,KernelVersion:10.0.17763.2183,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e 
k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:204397145,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:203202672,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:179603451,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:104484632,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/windows-nanoserver@sha256:fb9b25770487567c02bf90dd3edea7917323556d1b7ba81ec042ffd5f9effeae gcr.io/authenticated-image-pulling/windows-nanoserver:v1],SizeBytes:101148102,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 22:15:51.097: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 22:15:51.144: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 22:15:51.192: INFO: ss2-1 started at 2023-02-02 22:09:17 +0000 UTC (0+1 container statuses recorded) Feb 2 22:15:51.192: INFO: Container webserver ready: false, restart count 0 Feb 2 22:15:51.192: INFO: pod2 started at 2023-02-02 22:10:48 +0000 UTC (0+1 container statuses recorded) Feb 2 22:15:51.192: INFO: Container agnhost-container ready: false, restart count 1 Feb 2 22:15:51.192: INFO: update-demo-nautilus-tcxbs started at 2023-02-02 22:14:23 +0000 UTC (0+1 container statuses recorded) Feb 2 22:15:51.192: INFO: Container update-demo ready: false, restart count 0 Feb 2 22:15:51.192: INFO: execpod4ggdl started at 2023-02-02 22:10:10 +0000 UTC (0+1 container statuses recorded) Feb 2 22:15:51.192: INFO: Container agnhost-container ready: true, restart count 0 Feb 2 22:17:51.303: INFO: Logging node info for node e2e-7d89e54d79-37bac-windows-node-group-q21f Feb 2 22:17:51.347: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-windows-node-group-q21f eef3ae47-aa0d-4af8-87e8-4c4de04eace2 19722 0 2023-02-02 21:29:14 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-windows-node-group-q21f kubernetes.io/os:windows node.kubernetes.io/instance-type:n1-standard-4 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:29:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:29:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet.exe Update v1 2023-02-02 21:29:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet.exe Update v1 2023-02-02 21:30:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-windows-node-group-q21f,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{107252527104 0} {<nil>} 104738796Ki BinarySI},memory: {{16102309888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{96527274234 0} {<nil>} 96527274234 DecimalSI},memory: {{15840165888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:29:14 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 22:17:00 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 22:17:00 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 22:17:00 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 22:17:00 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.230.207,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-windows-node-group-q21f.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-windows-node-group-q21f,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e2e-7d89e54d79-37bac-windows-node-group-q21f,SystemUUID:B1BBE679-4138-5169-4472-E3B13289F193,BootID:9,KernelVersion:10.0.17763.2183,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:204397145,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c k8s.gcr.io/etcd:3.5.6-0],SizeBytes:203784192,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:203202672,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:179603451,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:104484632,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:102745583,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 22:17:51.347: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-windows-node-group-q21f Feb 2 22:17:51.392: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-windows-node-group-q21f Feb 2 22:17:51.456: INFO: update-demo-nautilus-gqsht started at 2023-02-02 22:14:23 +0000 UTC (0+1 container statuses recorded) Feb 2 22:17:51.456: INFO: Container update-demo ready: true, restart count 0 Feb 2 22:17:51.456: INFO: ss2-0 started at 2023-02-02 22:09:09 +0000 UTC (0+1 container statuses recorded) Feb 2 22:17:51.456: INFO: Container webserver ready: true, restart count 0 Feb 2 22:17:51.681: INFO: Latency metrics for node 
e2e-7d89e54d79-37bac-windows-node-group-q21f Feb 2 22:17:51.681: INFO: Waiting up to 3m0s for all (but 3) nodes to be ready STEP: Destroying namespace "services-6696" for this suite. [AfterEach] [sig-network] Services test/e2e/network/service.go:762
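The kubelet pod listings above carry the key signal in these dumps: on e2e-7d89e54d79-37bac-windows-node-group-k0qm, pod ss2-1's webserver container still reports ready: false more than six minutes after it started, which is exactly the condition a rollout wait keeps polling for before giving up. For readers reproducing such a wait by hand, the following is a minimal client-go sketch of a Running-and-Ready poll; it is not the e2e framework's own helper, and the kubeconfig path, namespace, pod name, and intervals are illustrative placeholders.

// Minimal client-go sketch of the Running-and-Ready poll this kind of wait
// performs. Not the e2e framework's own helper; the kubeconfig path,
// namespace, pod name, and intervals below are illustrative placeholders.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	const namespace, podName = "statefulset-0000", "ss2-1" // placeholders

	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll until the pod is Running and its PodReady condition is True.
	err = wait.PollImmediate(10*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), podName, metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient lookup errors and retry
		}
		ready := false
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s: phase=%s ready=%t\n", podName, pod.Status.Phase, ready)
		return pod.Status.Phase == corev1.PodRunning && ready, nil
	})
	if err != nil {
		panic(err) // e.g. wait.ErrWaitTimeout after the 5m ceiling
	}
}

A pod that starts but never turns Ready, as ss2-1 does here, makes such a poll run to its timeout, which matches the "timed out waiting for the condition" shape of the failures in this run.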
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-node\]\sContainer\sRuntime\sblackbox\stest\son\sterminated\scontainer\sshould\sreport\stermination\smessage\sif\sTerminationMessagePath\sis\sset\sas\snon\-root\suser\sand\sat\sa\snon\-default\spath\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:652 Feb 2 21:54:37.337: Timed out after 300.001s. Expected <v1.PodPhase>: Failed to equal <v1.PodPhase>: Succeeded test/e2e/common/node/runtime.go:156 from junit_02.xml
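For context, this conformance case creates a short-lived pod whose container runs as a non-root user and writes its termination message to a non-default TerminationMessagePath, then waits up to 300s for the pod to reach Succeeded; here the last observed phase was Failed, so the wait expired. Below is a sketch of a pod of the shape this test exercises, using the real core/v1 fields; the pod name, command, UID, and message path are assumptions for illustration, not the framework's actual fixture.

// Illustrative pod of the shape this test exercises: a non-root container
// writing its termination message to a non-default path. The field names are
// the real core/v1 API; the pod name, command, UID, and path are assumptions
// for the sketch, not the framework's actual fixture.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func terminationMessagePod() *corev1.Pod {
	nonRootUID := int64(1000) // any non-zero UID counts as "non-root"
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // let the pod reach Succeeded
			Containers: []corev1.Container{{
				Name:    "termination-message-container",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-2",
				Command: []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
				// After the container exits, the kubelet reads this file back
				// into status.containerStatuses[].state.terminated.message.
				TerminationMessagePath: "/dev/termination-custom-log",
				SecurityContext:        &corev1.SecurityContext{RunAsUser: &nonRootUID},
			}},
		},
	}
}

func main() {
	p := terminationMessagePod()
	fmt.Println("termination message path:", p.Spec.Containers[0].TerminationMessagePath)
}

The full log below shows the container was scheduled, pulled, and started on the Windows node within seconds, so the 300s expiry reflects the pod finishing in a phase other than Succeeded rather than a scheduling stall.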
[BeforeEach] [sig-node] Container Runtime test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Feb 2 21:49:36.949: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] test/e2e/framework/framework.go:652 STEP: create the container STEP: wait for the container to reach Succeeded Feb 2 21:54:37.337: FAIL: Timed out after 300.001s. Expected <v1.PodPhase>: Failed to equal <v1.PodPhase>: Succeeded Full Stack Trace k8s.io/kubernetes/test/e2e/common/node.glob..func18.1.2.1({{0x71f2c4a, 0x1d}, {0xc000351050, 0x29}, {0xc001f3e120, 0x2, 0x2}, {0xc0062d01a0, 0x1, 0x1}, ...}, ...) test/e2e/common/node/runtime.go:156 +0x392 k8s.io/kubernetes/test/e2e/common/node.glob..func18.1.2.3() test/e2e/common/node/runtime.go:207 +0x28e k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?) test/e2e/e2e.go:130 +0x6bb k8s.io/kubernetes/test/e2e.TestE2E(0x24e52d9?) test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0006fa340, 0x741f9a8) /usr/local/go/src/testing/testing.go:1446 +0x10b created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1493 +0x35f [AfterEach] [sig-node] Container Runtime test/e2e/framework/framework.go:188 STEP: Collecting events from namespace "container-runtime-6992". STEP: Found 4 events. Feb 2 21:54:37.428: INFO: At 2023-02-02 21:49:37 +0000 UTC - event for termination-message-container73ea76c3-3c2c-4a5c-9384-38f5ecb6a318: {default-scheduler } Scheduled: Successfully assigned container-runtime-6992/termination-message-container73ea76c3-3c2c-4a5c-9384-38f5ecb6a318 to e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 21:54:37.428: INFO: At 2023-02-02 21:49:39 +0000 UTC - event for termination-message-container73ea76c3-3c2c-4a5c-9384-38f5ecb6a318: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Feb 2 21:54:37.428: INFO: At 2023-02-02 21:49:39 +0000 UTC - event for termination-message-container73ea76c3-3c2c-4a5c-9384-38f5ecb6a318: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Created: Created container termination-message-container Feb 2 21:54:37.428: INFO: At 2023-02-02 21:49:41 +0000 UTC - event for termination-message-container73ea76c3-3c2c-4a5c-9384-38f5ecb6a318: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Started: Started container termination-message-container Feb 2 21:54:37.477: INFO: POD NODE PHASE GRACE CONDITIONS Feb 2 21:54:37.477: INFO: Feb 2 21:54:37.534: INFO: Logging node info for node e2e-7d89e54d79-37bac-master Feb 2 21:54:37.579: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-master b403d958-aef5-4e5e-9b07-9812dc3e7d8b 10743 0 2023-02-02 21:23:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:23:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-02-02 21:23:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-02-02 21:23:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kubelet Update v1 2023-02-02 21:23:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3864313856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3602169856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:23:35 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:54:35 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:54:35 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:54:35 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:54:35 +0000 UTC,LastTransitionTime:2023-02-02 21:23:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.2,},NodeAddress{Type:ExternalIP,Address:35.247.98.204,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-master.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-master.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e8df098cf83a91bc3c7c2a97ba5a41e9,SystemUUID:e8df098c-f83a-91bc-3c7c-2a97ba5a41e9,BootID:59df2086-4103-4b38-9939-c916841efb98,KernelVersion:5.4.129+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:131733971,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:121342787,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:52751170,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:be60ef505fc80879eeb5d8bf3ad8bb1146b395afc2394584645e99431806c26c gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.12.0],SizeBytes:32705362,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:d863f7fd0da4392b9753dc6c9195a658e80d70e0be8c9adb410d77cf20b75c76 registry.k8s.io/kas-network-proxy/proxy-server:v0.0.35],SizeBytes:21985251,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 21:54:37.579: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-master Feb 2 21:54:37.632: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-master Feb 2 21:54:37.688: INFO: l7-lb-controller-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:58 +0000 UTC (0+1 container statuses recorded) Feb 2 21:54:37.688: INFO: Container l7-lb-controller ready: true, restart count 3 Feb 2 21:54:37.688: INFO: metadata-proxy-v0.1-fmxnz started at 2023-02-02 21:23:46 +0000 UTC (0+2 container statuses recorded) Feb 2 21:54:37.688: INFO: Container metadata-proxy ready: true, restart count 0 Feb 2 
21:54:37.688: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Feb 2 21:54:37.688: INFO: konnectivity-server-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:54:37.688: INFO: Container konnectivity-server-container ready: true, restart count 0 Feb 2 21:54:37.688: INFO: kube-apiserver-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:54:37.688: INFO: Container kube-apiserver ready: true, restart count 0 Feb 2 21:54:37.688: INFO: kube-scheduler-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:54:37.688: INFO: Container kube-scheduler ready: true, restart count 0 Feb 2 21:54:37.688: INFO: etcd-server-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:54:37.688: INFO: Container etcd-container ready: true, restart count 0 Feb 2 21:54:37.688: INFO: kube-controller-manager-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:54:37.688: INFO: Container kube-controller-manager ready: true, restart count 2 Feb 2 21:54:37.688: INFO: etcd-server-events-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:54:37.688: INFO: Container etcd-container ready: true, restart count 0 Feb 2 21:54:37.688: INFO: kube-addon-manager-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:58 +0000 UTC (0+1 container statuses recorded) Feb 2 21:54:37.688: INFO: Container kube-addon-manager ready: true, restart count 0 Feb 2 21:54:37.907: INFO: Latency metrics for node e2e-7d89e54d79-37bac-master Feb 2 21:54:37.907: INFO: Logging node info for node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 21:54:37.951: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-minion-group-1vp1 d81fe224-05dd-48a7-9693-e2f2826a1b97 10580 0 2023-02-02 21:23:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-minion-group-1vp1 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-4 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-02-02 21:23:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-02-02 21:23:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.5.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 
21:23:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-02-02 21:23:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-02-02 21:24:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.5.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-minion-group-1vp1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.5.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15735660544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15473516544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-02-02 21:53:46 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-02-02 21:53:46 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-02-02 21:53:46 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-02-02 21:53:46 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 
UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-02-02 21:53:46 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-02-02 21:53:46 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-02-02 21:53:46 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:23:38 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:51:42 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:51:42 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:51:42 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:51:42 +0000 UTC,LastTransitionTime:2023-02-02 21:23:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.7,},NodeAddress{Type:ExternalIP,Address:35.197.102.154,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-minion-group-1vp1.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-minion-group-1vp1.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ade76a2ad94b4b90c3b7ba811704d98c,SystemUUID:ade76a2a-d94b-4b90-c3b7-ba811704d98c,BootID:29452487-f38a-42cd-8605-aecb73730dd9,KernelVersion:5.4.129+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:112212023,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4 k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0],SizeBytes:18952261,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[k8s.gcr.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 k8s.gcr.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:8970dca5c4c9df1d566c3c3c91ef2e743e410a8623d42062eb48e7245f1eef93 registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.35],SizeBytes:8488019,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 k8s.gcr.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf 
k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 21:54:37.951: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 21:54:37.995: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 21:54:38.064: INFO: metadata-proxy-v0.1-kmxp5 started at 2023-02-02 21:23:39 +0000 UTC (0+2 container statuses recorded) Feb 2 21:54:38.064: INFO: Container metadata-proxy ready: true, restart count 0 Feb 2 21:54:38.064: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Feb 2 21:54:38.064: INFO: kube-proxy-e2e-7d89e54d79-37bac-minion-group-1vp1 started at 2023-02-02 21:23:38 +0000 UTC (0+1 container statuses recorded) Feb 2 21:54:38.064: INFO: Container kube-proxy ready: true, restart count 0 Feb 2 21:54:38.064: INFO: coredns-8c79ffd8b-rd5tr started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:54:38.064: INFO: Container coredns ready: true, restart count 0 Feb 2 21:54:38.064: INFO: l7-default-backend-8667cd4ffc-pgmnb started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:54:38.064: INFO: Container default-http-backend ready: true, restart count 0 Feb 2 21:54:38.064: INFO: volume-snapshot-controller-0 started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:54:38.064: INFO: Container volume-snapshot-controller ready: true, restart count 0 Feb 2 21:54:38.064: INFO: konnectivity-agent-mn5mq started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:54:38.064: INFO: Container konnectivity-agent ready: true, restart count 0 Feb 2 21:54:38.064: INFO: kube-dns-autoscaler-596f6cf79f-v76jk started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:54:38.064: INFO: Container autoscaler ready: true, restart count 0 Feb 2 21:54:38.252: INFO: Latency metrics for node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 21:54:38.252: INFO: Logging node info for node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 21:54:38.295: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-minion-group-fhnf 9c0dcb7a-8a6b-4535-afe8-b62bf19173f7 10583 0 2023-02-02 21:23:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-minion-group-fhnf kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-4 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:23:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.4.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:23:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-02-02 21:23:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-02-02 21:23:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-02-02 21:24:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.4.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-minion-group-fhnf,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.4.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15735660544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15473516544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-02-02 21:53:47 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-02-02 21:53:47 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no 
deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-02-02 21:53:47 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-02-02 21:53:47 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-02-02 21:53:47 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-02-02 21:53:47 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-02-02 21:53:47 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:23:39 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:51:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:51:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:51:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:51:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.6,},NodeAddress{Type:ExternalIP,Address:34.127.30.111,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-minion-group-fhnf.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-minion-group-fhnf.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8abcabc408b3bd715147992f3d5a5854,SystemUUID:8abcabc4-08b3-bd71-5147-992f3d5a5854,BootID:8d0324c2-172b-4c43-81ee-83b6878e11ee,KernelVersion:5.4.129+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:112212023,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[k8s.gcr.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 k8s.gcr.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:8970dca5c4c9df1d566c3c3c91ef2e743e410a8623d42062eb48e7245f1eef93 registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.35],SizeBytes:8488019,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 21:54:38.295: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 21:54:38.339: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 21:54:38.402: INFO: coredns-8c79ffd8b-4v5p9 started at 2023-02-02 21:23:54 +0000 UTC (0+1 container statuses recorded) Feb 2 21:54:38.402: INFO: Container coredns ready: true, restart count 0 
Feb 2 21:54:38.402: INFO: metadata-proxy-v0.1-xl4fd started at 2023-02-02 21:23:40 +0000 UTC (0+2 container statuses recorded) Feb 2 21:54:38.402: INFO: Container metadata-proxy ready: true, restart count 0 Feb 2 21:54:38.402: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Feb 2 21:54:38.402: INFO: konnectivity-agent-k667p started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:54:38.402: INFO: Container konnectivity-agent ready: true, restart count 0 Feb 2 21:54:38.402: INFO: kube-proxy-e2e-7d89e54d79-37bac-minion-group-fhnf started at 2023-02-02 21:23:39 +0000 UTC (0+1 container statuses recorded) Feb 2 21:54:38.402: INFO: Container kube-proxy ready: true, restart count 0 Feb 2 21:54:38.402: INFO: metrics-server-v0.5.2-6d6794c8cd-9vklc started at 2023-02-02 21:24:01 +0000 UTC (0+2 container statuses recorded) Feb 2 21:54:38.402: INFO: Container metrics-server ready: true, restart count 0 Feb 2 21:54:38.402: INFO: Container metrics-server-nanny ready: true, restart count 0 Feb 2 21:54:38.586: INFO: Latency metrics for node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 21:54:38.586: INFO: Logging node info for node e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 21:54:38.629: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-windows-node-group-jllf 1f67dcc8-9253-4e93-8b90-78810a8df879 9351 0 2023-02-02 21:29:09 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-windows-node-group-jllf kubernetes.io/os:windows node.kubernetes.io/instance-type:n1-standard-4 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:29:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:29:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet.exe Update v1 2023-02-02 21:29:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet.exe Update v1 2023-02-02 21:30:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-windows-node-group-jllf,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{107252527104 0} {<nil>} 104738796Ki BinarySI},memory: {{16102309888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{96527274234 0} {<nil>} 96527274234 DecimalSI},memory: {{15840165888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:29:09 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:51:48 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:51:48 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:51:48 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:51:48 +0000 UTC,LastTransitionTime:2023-02-02 21:29:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.4,},NodeAddress{Type:ExternalIP,Address:34.83.75.252,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-windows-node-group-jllf.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-windows-node-group-jllf,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e2e-7d89e54d79-37bac-windows-node-group-jllf,SystemUUID:23C88569-8B16-0615-6BFF-BB819EADA98A,BootID:9,KernelVersion:10.0.17763.2183,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:204397145,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c 
k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:203202672,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:104484632,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:102745583,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 21:54:38.630: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 21:54:38.673: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 21:54:38.738: INFO: pod-configmaps-8ca3712a-d91b-488a-b20f-c3bd3d4cc732 started at 2023-02-02 21:53:26 +0000 UTC (0+3 container statuses recorded) Feb 2 21:54:38.738: INFO: Container createcm-volume-test ready: false, restart count 0 Feb 2 21:54:38.738: INFO: Container delcm-volume-test ready: false, restart count 0 Feb 2 21:54:38.738: INFO: Container updcm-volume-test ready: false, restart count 0 Feb 2 21:54:38.738: INFO: pod-service-account-defaultsa-mountspec started at 2023-02-02 21:53:12 +0000 UTC (0+1 container statuses recorded) Feb 2 21:54:38.738: INFO: Container token-test ready: false, restart count 0 Feb 2 21:54:38.738: INFO: pod-service-account-defaultsa started at 2023-02-02 21:53:12 +0000 UTC (0+1 container statuses recorded) Feb 2 21:54:38.738: INFO: Container token-test ready: false, restart count 0 Feb 2 21:54:38.738: INFO: pod-service-account-nomountsa-nomountspec started at 2023-02-02 21:53:12 +0000 UTC (0+1 container statuses recorded) Feb 2 21:54:38.738: INFO: Container token-test ready: false, restart count 0 Feb 2 21:55:49.816: INFO: Latency metrics for node e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 21:55:49.816: INFO: Logging node info for node e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 21:55:49.859: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-windows-node-group-k0qm 91cb59e3-df60-4007-bdc1-bb197e591e43 10576 0 2023-02-02 21:29:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-windows-node-group-k0qm kubernetes.io/os:windows node.kubernetes.io/instance-type:n1-standard-4 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet.exe Update v1 2023-02-02 21:29:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-02-02 21:29:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:29:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet.exe Update v1 2023-02-02 21:30:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-windows-node-group-k0qm,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{107252527104 0} {<nil>} 104738796Ki BinarySI},memory: {{16102309888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{96527274234 0} {<nil>} 96527274234 DecimalSI},memory: {{15840165888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:29:24 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:53:45 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:53:45 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:53:45 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:53:45 +0000 UTC,LastTransitionTime:2023-02-02 21:29:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.1.208,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-windows-node-group-k0qm.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-windows-node-group-k0qm,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e2e-7d89e54d79-37bac-windows-node-group-k0qm,SystemUUID:FC53E984-3141-4AB0-99D2-83726BB3072F,BootID:9,KernelVersion:10.0.17763.2183,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:204397145,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:203202672,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:179603451,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:104484632,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 21:55:49.859: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 21:55:49.905: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 21:55:49.970: INFO: pod-service-account-nomountsa-mountspec started at 2023-02-02 21:53:12 +0000 UTC (0+1 container statuses recorded) Feb 2 21:55:49.970: INFO: Container token-test ready: true, restart count 0 Feb 2 21:55:49.970: INFO: pod-service-account-defaultsa-nomountspec started at 2023-02-02 21:53:12 +0000 UTC (0+1 container statuses recorded) Feb 2 21:55:49.970: INFO: Container token-test ready: true, restart count 0 Feb 2 21:55:49.970: INFO: pod-init-bc9e445a-d027-4afb-ac5b-34e8ba076460 started at 2023-02-02 21:54:02 +0000 UTC (2+1 container statuses recorded) Feb 2 21:55:49.970: INFO: Init container init1 ready: false, restart count 0 Feb 2 21:55:49.970: INFO: Init container init2 ready: false, restart count 0 Feb 2 21:55:49.970: INFO: Container run1 ready: false, restart 
count 0 Feb 2 21:55:49.970: INFO: nodeport-test-7226z started at 2023-02-02 21:54:07 +0000 UTC (0+1 container statuses recorded) Feb 2 21:55:49.970: INFO: Container netexec ready: false, restart count 0 Feb 2 21:57:20.556: INFO: Latency metrics for node e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 21:57:20.556: INFO: Logging node info for node e2e-7d89e54d79-37bac-windows-node-group-q21f Feb 2 21:57:20.598: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-windows-node-group-q21f eef3ae47-aa0d-4af8-87e8-4c4de04eace2 10670 0 2023-02-02 21:29:14 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-windows-node-group-q21f kubernetes.io/os:windows node.kubernetes.io/instance-type:n1-standard-4 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:29:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:29:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet.exe Update v1 2023-02-02 21:29:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet.exe Update v1 2023-02-02 21:30:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-windows-node-group-q21f,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{107252527104 0} {<nil>} 104738796Ki BinarySI},memory: {{16102309888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{96527274234 0} {<nil>} 96527274234 DecimalSI},memory: {{15840165888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:29:14 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:54:09 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:54:09 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:54:09 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:54:09 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.230.207,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-windows-node-group-q21f.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-windows-node-group-q21f,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e2e-7d89e54d79-37bac-windows-node-group-q21f,SystemUUID:B1BBE679-4138-5169-4472-E3B13289F193,BootID:9,KernelVersion:10.0.17763.2183,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:204397145,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:203202672,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:179603451,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 
registry.k8s.io/pause:3.7],SizeBytes:104484632,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:102745583,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 21:57:20.598: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-windows-node-group-q21f Feb 2 21:57:20.642: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-windows-node-group-q21f Feb 2 21:57:21.637: INFO: Latency metrics for node e2e-7d89e54d79-37bac-windows-node-group-q21f Feb 2 21:57:21.637: INFO: Waiting up to 3m0s for all (but 3) nodes to be ready STEP: Destroying namespace "container-runtime-6992" for this suite.
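The teardown line above ("Waiting up to 3m0s for all (but 3) nodes to be ready") is the suite re-checking node health after the failure, using each node's Ready condition of the kind dumped at length in the node info above. The following client-go sketch counts nodes whose Ready condition is not True; it is an illustrative reimplementation for clarity, not the framework's own code, and the kubeconfig path is taken from the log.

// Sketch: count not-ready nodes the way the suite's teardown check does.
// Illustrative only; not the e2e framework's actual implementation.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isNodeReady reports whether the node's Ready condition is True.
func isNodeReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path as reported in the test log.
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	notReady := 0
	for i := range nodes.Items {
		if !isNodeReady(&nodes.Items[i]) {
			notReady++
		}
	}
	// The suite tolerates up to 3 not-ready nodes ("all (but 3)").
	fmt.Printf("%d of %d nodes not ready\n", notReady, len(nodes.Items))
}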
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-windows\]\sServices\sshould\sbe\sable\sto\screate\sa\sfunctioning\sNodePort\sservice\sfor\sWindows$'
test/e2e/windows/service.go:44 Feb 2 21:56:07.965: Unexpected error: <*errors.errorString | 0xc007606180>: { s: "failed waiting for pods to be running: timeout waiting for 1 pods to be ready", } failed waiting for pods to be running: timeout waiting for 1 pods to be ready occurred test/e2e/windows/service.go:68 from junit_01.xml
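The "Waiting up to 2m0s for pod ... to be running and ready" lines that follow come from a poll loop over the pod's phase and Ready condition. Below is a minimal sketch of such a loop using client-go and apimachinery's wait helpers; it is an illustration, not the framework's exact implementation. The 2-second poll interval is an assumption, and the pod and namespace names in the usage comment are taken from the log.

// Sketch of a running-and-ready poll loop; illustrative, not the
// framework's own code.
package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForPodRunningReady polls until the pod is Running with a True
// Ready condition, or the timeout expires. Usage matching this log:
//   WaitForPodRunningReady(clientset, "services-4540", "nodeport-test-7226z", 2*time.Minute)
func WaitForPodRunningReady(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	start := time.Now()
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		ready := false
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		// Mirrors the log format: Phase="Pending", readiness=false. Elapsed: ...
		fmt.Printf("Pod %q: Phase=%q, readiness=%v. Elapsed: %v\n",
			name, pod.Status.Phase, ready, time.Since(start))
		return pod.Status.Phase == corev1.PodRunning && ready, nil
	})
}

On timeout, wait.PollImmediate returns an error, which is what surfaces above as "timeout waiting for 1 pods to be ready".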
[BeforeEach] [sig-windows] Services test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] Services test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Feb 2 21:54:06.900: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-windows] Services test/e2e/windows/service.go:39 [It] should be able to create a functioning NodePort service for Windows test/e2e/windows/service.go:44 STEP: creating service nodeport-test with type=NodePort in namespace services-4540 STEP: creating Pod to be part of service nodeport-test Feb 2 21:54:07.335: INFO: Waiting up to 2m0s for 1 pods to be created Feb 2 21:54:07.381: INFO: Found all 1 pods Feb 2 21:54:07.381: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [nodeport-test-7226z] Feb 2 21:54:07.381: INFO: Waiting up to 2m0s for pod "nodeport-test-7226z" in namespace "services-4540" to be "running and ready" Feb 2 21:54:07.423: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 41.816267ms Feb 2 21:54:09.469: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087644535s Feb 2 21:54:11.512: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130505824s Feb 2 21:54:13.555: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.174292827s Feb 2 21:54:15.598: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.217010334s Feb 2 21:54:17.642: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 10.260772437s Feb 2 21:54:19.685: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 12.30441294s Feb 2 21:54:21.728: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 14.346725273s Feb 2 21:54:23.771: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 16.390243541s Feb 2 21:54:25.814: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 18.432941744s Feb 2 21:54:27.857: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 20.476235263s Feb 2 21:54:29.900: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 22.518625293s Feb 2 21:54:31.943: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 24.561854153s Feb 2 21:54:33.991: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 26.610052102s Feb 2 21:54:36.034: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 28.652579972s Feb 2 21:54:38.076: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 30.695035695s Feb 2 21:54:40.119: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 32.738040404s Feb 2 21:54:42.162: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 34.780546422s Feb 2 21:54:44.204: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 36.823221231s Feb 2 21:54:46.247: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. 
Elapsed: 38.865909887s Feb 2 21:54:48.290: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 40.90889007s Feb 2 21:54:50.334: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 42.953201372s Feb 2 21:54:52.377: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 44.996248793s Feb 2 21:54:54.421: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 47.039907526s Feb 2 21:54:56.463: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 49.082140849s Feb 2 21:54:58.506: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 51.124713085s Feb 2 21:55:00.548: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 53.166943921s Feb 2 21:55:02.590: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 55.208904375s Feb 2 21:55:04.633: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 57.252283367s Feb 2 21:55:06.676: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 59.294685994s Feb 2 21:55:08.719: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m1.338083797s Feb 2 21:55:10.762: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m3.38112271s Feb 2 21:55:12.806: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.42458596s Feb 2 21:55:14.848: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m7.466929883s Feb 2 21:55:16.891: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m9.51015491s Feb 2 21:55:18.935: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m11.553730137s Feb 2 21:55:20.978: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m13.597061947s Feb 2 21:55:23.021: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m15.639450027s Feb 2 21:55:25.064: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m17.683015337s Feb 2 21:55:27.108: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m19.726673054s Feb 2 21:55:29.182: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m21.800547493s Feb 2 21:55:31.224: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m23.842811225s Feb 2 21:55:33.267: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m25.885439416s Feb 2 21:55:35.310: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m27.929149742s Feb 2 21:55:37.354: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m29.972462867s Feb 2 21:55:39.396: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.015213023s Feb 2 21:55:41.443: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.061445059s Feb 2 21:55:43.485: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.103908611s Feb 2 21:55:45.530: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m38.149387287s Feb 2 21:55:47.576: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.194837162s Feb 2 21:55:49.620: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.23873569s Feb 2 21:55:51.662: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.281367142s Feb 2 21:55:53.705: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.323838373s Feb 2 21:55:55.748: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.3667622s Feb 2 21:55:57.791: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.410202771s Feb 2 21:55:59.836: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.454650036s Feb 2 21:56:01.878: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.496997985s Feb 2 21:56:03.921: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.54039849s Feb 2 21:56:05.964: INFO: Pod "nodeport-test-7226z": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.583337158s Feb 2 21:56:07.965: INFO: Pod nodeport-test-7226z failed to be running and ready. Feb 2 21:56:07.965: INFO: Wanted all 1 pods to be running and ready. Result: false. Pods: [nodeport-test-7226z] Feb 2 21:56:07.965: FAIL: Unexpected error: <*errors.errorString | 0xc007606180>: { s: "failed waiting for pods to be running: timeout waiting for 1 pods to be ready", } failed waiting for pods to be running: timeout waiting for 1 pods to be ready occurred Full Stack Trace k8s.io/kubernetes/test/e2e/windows.glob..func14.2() test/e2e/windows/service.go:68 +0x190 k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?) test/e2e/e2e.go:130 +0x6bb k8s.io/kubernetes/test/e2e.TestE2E(0x24e5201?) test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000d124e0, 0x741f9a8) /usr/local/go/src/testing/testing.go:1446 +0x10b created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1493 +0x35f [AfterEach] [sig-windows] Services test/e2e/framework/framework.go:188 STEP: Collecting events from namespace "services-4540". STEP: Found 2 events. 
Feb 2 21:56:08.007: INFO: At 2023-02-02 21:54:07 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-7226z Feb 2 21:56:08.008: INFO: At 2023-02-02 21:54:07 +0000 UTC - event for nodeport-test-7226z: {default-scheduler } Scheduled: Successfully assigned services-4540/nodeport-test-7226z to e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 21:56:08.049: INFO: POD NODE PHASE GRACE CONDITIONS Feb 2 21:56:08.049: INFO: nodeport-test-7226z e2e-7d89e54d79-37bac-windows-node-group-k0qm Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 21:54:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 21:54:07 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 21:54:07 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 21:54:07 +0000 UTC }] Feb 2 21:56:08.050: INFO: Feb 2 21:56:08.106: INFO: Unable to fetch services-4540/nodeport-test-7226z/netexec logs: the server rejected our request for an unknown reason (get pods nodeport-test-7226z) Feb 2 21:56:08.152: INFO: Logging node info for node e2e-7d89e54d79-37bac-master Feb 2 21:56:08.195: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-master b403d958-aef5-4e5e-9b07-9812dc3e7d8b 10743 0 2023-02-02 21:23:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:23:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-02-02 21:23:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-02-02 21:23:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kubelet Update v1 2023-02-02 21:23:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3864313856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3602169856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:23:35 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:54:35 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:54:35 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:54:35 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:54:35 +0000 UTC,LastTransitionTime:2023-02-02 21:23:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.2,},NodeAddress{Type:ExternalIP,Address:35.247.98.204,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-master.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-master.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e8df098cf83a91bc3c7c2a97ba5a41e9,SystemUUID:e8df098c-f83a-91bc-3c7c-2a97ba5a41e9,BootID:59df2086-4103-4b38-9939-c916841efb98,KernelVersion:5.4.129+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:131733971,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:121342787,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:52751170,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:be60ef505fc80879eeb5d8bf3ad8bb1146b395afc2394584645e99431806c26c gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.12.0],SizeBytes:32705362,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:d863f7fd0da4392b9753dc6c9195a658e80d70e0be8c9adb410d77cf20b75c76 registry.k8s.io/kas-network-proxy/proxy-server:v0.0.35],SizeBytes:21985251,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 21:56:08.196: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-master Feb 2 21:56:08.239: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-master Feb 2 21:56:08.298: INFO: kube-apiserver-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:56:08.298: INFO: Container kube-apiserver ready: true, restart count 0 Feb 2 21:56:08.298: INFO: kube-scheduler-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:56:08.298: INFO: Container kube-scheduler ready: true, restart count 0 Feb 2 21:56:08.298: INFO: etcd-server-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:56:08.298: INFO: Container etcd-container ready: true, restart count 0 Feb 2 21:56:08.298: INFO: 
l7-lb-controller-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:58 +0000 UTC (0+1 container statuses recorded) Feb 2 21:56:08.298: INFO: Container l7-lb-controller ready: true, restart count 3 Feb 2 21:56:08.298: INFO: metadata-proxy-v0.1-fmxnz started at 2023-02-02 21:23:46 +0000 UTC (0+2 container statuses recorded) Feb 2 21:56:08.298: INFO: Container metadata-proxy ready: true, restart count 0 Feb 2 21:56:08.298: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Feb 2 21:56:08.298: INFO: konnectivity-server-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:56:08.298: INFO: Container konnectivity-server-container ready: true, restart count 0 Feb 2 21:56:08.298: INFO: etcd-server-events-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:56:08.298: INFO: Container etcd-container ready: true, restart count 0 Feb 2 21:56:08.298: INFO: kube-addon-manager-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:58 +0000 UTC (0+1 container statuses recorded) Feb 2 21:56:08.298: INFO: Container kube-addon-manager ready: true, restart count 0 Feb 2 21:56:08.298: INFO: kube-controller-manager-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:56:08.298: INFO: Container kube-controller-manager ready: true, restart count 2 Feb 2 21:56:08.486: INFO: Latency metrics for node e2e-7d89e54d79-37bac-master Feb 2 21:56:08.486: INFO: Logging node info for node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 21:56:08.528: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-minion-group-1vp1 d81fe224-05dd-48a7-9693-e2f2826a1b97 10580 0 2023-02-02 21:23:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-minion-group-1vp1 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-4 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-02-02 21:23:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-02-02 21:23:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.5.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:23:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-02-02 21:23:43 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-02-02 21:24:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.5.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-minion-group-1vp1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.5.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15735660544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15473516544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-02-02 21:53:46 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-02-02 21:53:46 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-02-02 21:53:46 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-02-02 21:53:46 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-02-02 21:53:46 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no 
deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-02-02 21:53:46 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-02-02 21:53:46 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:23:38 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:51:42 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:51:42 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:51:42 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:51:42 +0000 UTC,LastTransitionTime:2023-02-02 21:23:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.7,},NodeAddress{Type:ExternalIP,Address:35.197.102.154,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-minion-group-1vp1.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-minion-group-1vp1.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ade76a2ad94b4b90c3b7ba811704d98c,SystemUUID:ade76a2a-d94b-4b90-c3b7-ba811704d98c,BootID:29452487-f38a-42cd-8605-aecb73730dd9,KernelVersion:5.4.129+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:112212023,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4 
k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0],SizeBytes:18952261,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[k8s.gcr.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 k8s.gcr.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:8970dca5c4c9df1d566c3c3c91ef2e743e410a8623d42062eb48e7245f1eef93 registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.35],SizeBytes:8488019,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 k8s.gcr.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 21:56:08.529: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 21:56:08.572: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 21:56:08.634: INFO: kube-dns-autoscaler-596f6cf79f-v76jk started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:56:08.635: INFO: Container autoscaler ready: true, restart count 0 Feb 2 21:56:08.635: INFO: metadata-proxy-v0.1-kmxp5 started at 2023-02-02 21:23:39 +0000 UTC (0+2 container statuses recorded) Feb 2 21:56:08.635: INFO: Container metadata-proxy ready: true, restart count 0 Feb 2 21:56:08.635: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Feb 2 21:56:08.635: INFO: kube-proxy-e2e-7d89e54d79-37bac-minion-group-1vp1 started at 2023-02-02 21:23:38 +0000 UTC (0+1 container statuses recorded) Feb 2 21:56:08.635: INFO: Container kube-proxy ready: true, restart count 0 Feb 2 21:56:08.635: INFO: coredns-8c79ffd8b-rd5tr started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:56:08.635: INFO: Container coredns ready: true, restart count 0 Feb 2 21:56:08.635: INFO: l7-default-backend-8667cd4ffc-pgmnb started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:56:08.635: INFO: Container default-http-backend ready: true, restart count 0 Feb 2 21:56:08.635: INFO: volume-snapshot-controller-0 started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:56:08.635: INFO: Container volume-snapshot-controller ready: true, restart count 0 Feb 2 21:56:08.635: INFO: konnectivity-agent-mn5mq 
started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:56:08.635: INFO: Container konnectivity-agent ready: true, restart count 0 Feb 2 21:56:08.804: INFO: Latency metrics for node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 21:56:08.805: INFO: Logging node info for node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 21:56:08.847: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-minion-group-fhnf 9c0dcb7a-8a6b-4535-afe8-b62bf19173f7 10583 0 2023-02-02 21:23:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-minion-group-fhnf kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-4 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:23:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.4.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:23:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-02-02 21:23:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-02-02 21:23:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-02-02 21:24:09 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.4.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-minion-group-fhnf,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.4.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15735660544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15473516544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-02-02 21:53:47 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-02-02 21:53:47 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-02-02 21:53:47 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-02-02 21:53:47 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-02-02 21:53:47 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-02-02 21:53:47 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-02-02 21:53:47 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:23:39 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:51:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:51:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:51:44 
+0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:51:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.6,},NodeAddress{Type:ExternalIP,Address:34.127.30.111,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-minion-group-fhnf.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-minion-group-fhnf.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8abcabc408b3bd715147992f3d5a5854,SystemUUID:8abcabc4-08b3-bd71-5147-992f3d5a5854,BootID:8d0324c2-172b-4c43-81ee-83b6878e11ee,KernelVersion:5.4.129+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:112212023,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[k8s.gcr.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 k8s.gcr.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:8970dca5c4c9df1d566c3c3c91ef2e743e410a8623d42062eb48e7245f1eef93 registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.35],SizeBytes:8488019,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 21:56:08.848: INFO: Logging kubelet events for node 
e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 21:56:08.891: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 21:56:08.952: INFO: kube-proxy-e2e-7d89e54d79-37bac-minion-group-fhnf started at 2023-02-02 21:23:39 +0000 UTC (0+1 container statuses recorded) Feb 2 21:56:08.952: INFO: Container kube-proxy ready: true, restart count 0 Feb 2 21:56:08.952: INFO: metrics-server-v0.5.2-6d6794c8cd-9vklc started at 2023-02-02 21:24:01 +0000 UTC (0+2 container statuses recorded) Feb 2 21:56:08.952: INFO: Container metrics-server ready: true, restart count 0 Feb 2 21:56:08.952: INFO: Container metrics-server-nanny ready: true, restart count 0 Feb 2 21:56:08.952: INFO: coredns-8c79ffd8b-4v5p9 started at 2023-02-02 21:23:54 +0000 UTC (0+1 container statuses recorded) Feb 2 21:56:08.952: INFO: Container coredns ready: true, restart count 0 Feb 2 21:56:08.952: INFO: metadata-proxy-v0.1-xl4fd started at 2023-02-02 21:23:40 +0000 UTC (0+2 container statuses recorded) Feb 2 21:56:08.952: INFO: Container metadata-proxy ready: true, restart count 0 Feb 2 21:56:08.952: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Feb 2 21:56:08.952: INFO: konnectivity-agent-k667p started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:56:08.952: INFO: Container konnectivity-agent ready: true, restart count 0 Feb 2 21:56:09.126: INFO: Latency metrics for node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 21:56:09.126: INFO: Logging node info for node e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 21:56:09.169: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-windows-node-group-jllf 1f67dcc8-9253-4e93-8b90-78810a8df879 9351 0 2023-02-02 21:29:09 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-windows-node-group-jllf kubernetes.io/os:windows node.kubernetes.io/instance-type:n1-standard-4 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:29:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:29:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet.exe Update v1 2023-02-02 21:29:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet.exe Update v1 2023-02-02 21:30:10 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-windows-node-group-jllf,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{107252527104 0} {<nil>} 104738796Ki BinarySI},memory: {{16102309888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{96527274234 0} {<nil>} 96527274234 DecimalSI},memory: {{15840165888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:29:09 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:51:48 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:51:48 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:51:48 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:51:48 +0000 UTC,LastTransitionTime:2023-02-02 21:29:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.4,},NodeAddress{Type:ExternalIP,Address:34.83.75.252,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-windows-node-group-jllf.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-windows-node-group-jllf,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e2e-7d89e54d79-37bac-windows-node-group-jllf,SystemUUID:23C88569-8B16-0615-6BFF-BB819EADA98A,BootID:9,KernelVersion:10.0.17763.2183,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 
k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:204397145,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:203202672,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:104484632,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:102745583,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 21:56:09.169: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 21:56:09.212: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 21:56:09.258: INFO: pod-service-account-defaultsa started at 2023-02-02 21:53:12 +0000 UTC (0+1 container statuses recorded) Feb 2 21:56:09.258: INFO: Container token-test ready: true, restart count 0 Feb 2 21:56:09.258: INFO: pod-service-account-nomountsa-nomountspec started at 2023-02-02 21:53:12 +0000 UTC (0+1 container statuses recorded) Feb 2 21:56:09.258: INFO: Container token-test ready: true, restart count 0 Feb 2 21:56:09.258: INFO: pod-configmaps-8ca3712a-d91b-488a-b20f-c3bd3d4cc732 started at 2023-02-02 21:53:26 +0000 UTC (0+3 container statuses recorded) Feb 2 21:56:09.258: INFO: Container createcm-volume-test ready: false, restart count 0 Feb 2 21:56:09.258: INFO: Container delcm-volume-test ready: false, restart count 0 Feb 2 21:56:09.258: INFO: Container updcm-volume-test ready: false, restart count 0 Feb 2 21:57:28.519: INFO: Latency metrics for node e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 21:57:28.519: INFO: Logging node info for node e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 21:57:28.681: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-windows-node-group-k0qm 91cb59e3-df60-4007-bdc1-bb197e591e43 10576 0 2023-02-02 21:29:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-windows-node-group-k0qm kubernetes.io/os:windows node.kubernetes.io/instance-type:n1-standard-4 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet.exe Update v1 2023-02-02 21:29:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-02-02 21:29:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:29:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet.exe Update v1 2023-02-02 21:30:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-windows-node-group-k0qm,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{107252527104 0} {<nil>} 104738796Ki BinarySI},memory: {{16102309888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{96527274234 0} {<nil>} 96527274234 DecimalSI},memory: {{15840165888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:29:24 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:53:45 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:53:45 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:53:45 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:53:45 +0000 UTC,LastTransitionTime:2023-02-02 21:29:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.1.208,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-windows-node-group-k0qm.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-windows-node-group-k0qm,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e2e-7d89e54d79-37bac-windows-node-group-k0qm,SystemUUID:FC53E984-3141-4AB0-99D2-83726BB3072F,BootID:9,KernelVersion:10.0.17763.2183,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:204397145,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:203202672,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:179603451,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:104484632,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 21:57:28.682: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 21:57:28.841: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 21:57:29.087: INFO: pod-init-bc9e445a-d027-4afb-ac5b-34e8ba076460 started at 2023-02-02 21:54:02 +0000 UTC (2+1 container statuses recorded) Feb 2 21:57:29.087: INFO: Init container init1 ready: false, restart count 0 Feb 2 21:57:29.087: INFO: Init container init2 ready: false, restart count 0 Feb 2 21:57:29.087: INFO: Container run1 ready: false, restart count 0 Feb 2 21:57:29.087: INFO: nodeport-test-7226z started at 2023-02-02 21:54:07 +0000 UTC (0+1 container statuses recorded) Feb 2 21:57:29.087: INFO: Container netexec ready: false, restart count 0 Feb 2 21:58:02.851: INFO: Latency metrics for node e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 21:58:02.852: INFO: Logging node info for node e2e-7d89e54d79-37bac-windows-node-group-q21f Feb 2 21:58:02.895: INFO: Node Info: 
&Node{ObjectMeta:{e2e-7d89e54d79-37bac-windows-node-group-q21f eef3ae47-aa0d-4af8-87e8-4c4de04eace2 10670 0 2023-02-02 21:29:14 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-windows-node-group-q21f kubernetes.io/os:windows node.kubernetes.io/instance-type:n1-standard-4 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:29:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:29:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet.exe Update v1 2023-02-02 21:29:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet.exe Update v1 2023-02-02 21:30:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-windows-node-group-q21f,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{107252527104 0} {<nil>} 104738796Ki BinarySI},memory: {{16102309888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{96527274234 0} {<nil>} 96527274234 DecimalSI},memory: {{15840165888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:29:14 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:54:09 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:54:09 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:54:09 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:54:09 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.230.207,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-windows-node-group-q21f.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-windows-node-group-q21f,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e2e-7d89e54d79-37bac-windows-node-group-q21f,SystemUUID:B1BBE679-4138-5169-4472-E3B13289F193,BootID:9,KernelVersion:10.0.17763.2183,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:204397145,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:203202672,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:179603451,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:104484632,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:102745583,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 21:58:02.895: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-windows-node-group-q21f Feb 2 21:58:02.939: INFO: Logging pods the kubelet thinks is on node 
e2e-7d89e54d79-37bac-windows-node-group-q21f Feb 2 21:58:03.197: INFO: Latency metrics for node e2e-7d89e54d79-37bac-windows-node-group-q21f Feb 2 21:58:03.197: INFO: Waiting up to 3m0s for all (but 3) nodes to be ready STEP: Destroying namespace "services-4540" for this suite.
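The node dumps above reduce to a handful of NodeCondition entries per node: Ready should be True, while MemoryPressure, DiskPressure, PIDPressure and the node-problem-detector conditions (KernelDeadlock, ReadonlyFilesystem, the Frequent*Restart family) should all be False, which is the case for every node logged here. For triage it is usually easier to pull just those conditions than to read full Node objects. The following is a minimal client-go sketch of that idea; the kubeconfig path is taken from the job's log, and everything else (package name, the flagging rule) is an illustrative assumption rather than part of the e2e framework:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as used by the job above; adjust for other environments.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		for _, cond := range node.Status.Conditions {
			// Ready is healthy when True; every other condition type in the
			// dumps above (pressure and node-problem-detector conditions)
			// is healthy when False.
			ok := (cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue) ||
				(cond.Type != corev1.NodeReady && cond.Status == corev1.ConditionFalse)
			if !ok {
				fmt.Printf("%s: %s=%s (%s: %s)\n",
					node.Name, cond.Type, cond.Status, cond.Reason, cond.Message)
			}
		}
	}
}
```

Run against the cluster dumped above, this would print nothing: every condition in the node logs is in its expected state, so the failures below are not explained by node health.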
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-windows\]\s\[Feature\:WindowsHostProcessContainers\]\s\[MinimumKubeletVersion\:1\.22\]\sHostProcess\scontainers\scontainer\scommand\spath\svalidation$'
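The value passed to --ginkgo.focus is an anchored RE2 regular expression matched against the full, space-joined test name, which is why spaces appear as \s and the brackets, colons, hyphens and dots are backslash-escaped. A quick way to sanity-check such a focus string (a hypothetical snippet, not part of the job) is to compile it with Go's regexp package against the plain test name:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Focus pattern copied from the repro command above.
	focus := regexp.MustCompile(`Kubernetes\se2e\ssuite\s\[sig\-windows\]\s\[Feature\:WindowsHostProcessContainers\]\s\[MinimumKubeletVersion\:1\.22\]\sHostProcess\scontainers\scontainer\scommand\spath\svalidation$`)

	// The plain, space-joined name of the failing test.
	name := "Kubernetes e2e suite [sig-windows] [Feature:WindowsHostProcessContainers] " +
		"[MinimumKubeletVersion:1.22] HostProcess containers container command path validation"

	fmt.Println(focus.MatchString(name)) // prints: true
}
```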
test/e2e/windows/host_process.go:197 Feb 2 21:42:18.314: wait for pod "host-process-command-1" to finish running Expected success, but got an error: <*errors.errorString | 0xc0047efac0>: { s: "Gave up after waiting 3m0s for pod \"host-process-command-1\" to be \"Succeeded or Failed\"", } Gave up after waiting 3m0s for pod "host-process-command-1" to be "Succeeded or Failed" test/e2e/framework/pods.go:257 from junit_01.xml
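The timeout is produced by the framework's pod-wait helper at test/e2e/framework/pods.go:257: the pod is re-read roughly every two seconds until its phase reaches Succeeded or Failed, and the helper gives up after 3m0s, which matches the cadence of the Pending lines in the log below. A minimal sketch of that style of poll loop, assuming a ready client-go clientset and making no claim to be the framework's actual implementation:

```go
package triage

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForFinish re-reads a pod every 2s until its phase is Succeeded or
// Failed, giving up after 3m0s -- the same cadence and timeout seen in the
// log below. A sketch only, not the framework's WaitForFinish.
func waitForFinish(cs kubernetes.Interface, ns, name string) error {
	start := time.Now()
	return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %v\n", name, pod.Status.Phase, time.Since(start))
		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
	})
}
```

In this run host-process-command-1 never left Pending, so the condition function never returned true and the helper surfaced the "Gave up after waiting 3m0s" error above.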
[BeforeEach] [sig-windows] [Feature:WindowsHostProcessContainers] [MinimumKubeletVersion:1.22] HostProcess containers test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] [Feature:WindowsHostProcessContainers] [MinimumKubeletVersion:1.22] HostProcess containers test/e2e/windows/host_process.go:81 [BeforeEach] [sig-windows] [Feature:WindowsHostProcessContainers] [MinimumKubeletVersion:1.22] HostProcess containers test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Feb 2 21:39:01.247: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename host-process-test-windows STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] container command path validation test/e2e/windows/host_process.go:197 STEP: Adding a container 'host-process-command-0-0' to pod 'host-process-command-0' with command: cmd.exe /c ver, args: , workingDir: STEP: Adding a container 'host-process-command-0-1' to pod 'host-process-command-0' with command: System32\cmd.exe /c ver, args: , workingDir: c:\Windows STEP: Adding a container 'host-process-command-0-2' to pod 'host-process-command-0' with command: System32\cmd.exe /c ver, args: , workingDir: c:\Windows\ STEP: Adding a container 'host-process-command-0-3' to pod 'host-process-command-0' with command: %CONTAINER_SANDBOX_MOUNT_POINT%\bin\uname.exe -o, args: , workingDir: STEP: Waiting for pod 'host-process-command-0' to run Feb 2 21:39:01.688: INFO: Waiting up to 3m0s for pod "host-process-command-0" in namespace "host-process-test-windows-4310" to be "Succeeded or Failed" Feb 2 21:39:01.731: INFO: Pod "host-process-command-0": Phase="Pending", Reason="", readiness=false. Elapsed: 43.143104ms Feb 2 21:39:03.775: INFO: Pod "host-process-command-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086567053s Feb 2 21:39:05.819: INFO: Pod "host-process-command-0": Phase="Running", Reason="", readiness=true. Elapsed: 4.131281761s Feb 2 21:39:07.864: INFO: Pod "host-process-command-0": Phase="Running", Reason="", readiness=true. Elapsed: 6.176298895s Feb 2 21:39:09.907: INFO: Pod "host-process-command-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.219229307s Feb 2 21:39:12.009: INFO: Pod "host-process-command-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.32047718s Feb 2 21:39:14.060: INFO: Pod "host-process-command-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.372089655s Feb 2 21:39:16.105: INFO: Pod "host-process-command-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.416721055s Feb 2 21:39:18.154: INFO: Pod "host-process-command-0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.465393152s Feb 2 21:39:18.154: INFO: Pod "host-process-command-0" satisfied condition "Succeeded or Failed" STEP: Then ensuring pod finished running successfully STEP: Adding a container 'host-process-command-1-0' to pod 'host-process-command-1' with command: %CONTAINER_SANDBOX_MOUNT_POINT%/bin/uname.exe -o, args: , workingDir: STEP: Adding a container 'host-process-command-1-1' to pod 'host-process-command-1' with command: %CONTAINER_SANDBOX_MOUNT_POINT%\bin/uname.exe -o, args: , workingDir: STEP: Adding a container 'host-process-command-1-2' to pod 'host-process-command-1' with command: bin/uname.exe -o, args: , workingDir: STEP: Adding a container 'host-process-command-1-3' to pod 'host-process-command-1' with command: bin/uname.exe -o, args: , workingDir: %CONTAINER_SANDBOX_MOUNT_POINT% STEP: Waiting for pod 'host-process-command-1' to run Feb 2 21:39:18.249: INFO: Waiting up to 3m0s for pod "host-process-command-1" in namespace "host-process-test-windows-4310" to be "Succeeded or Failed" Feb 2 21:39:18.294: INFO: Pod "host-process-command-1": Phase="Pending", Reason="", readiness=false. Elapsed: 45.015209ms Feb 2 21:39:20.380: INFO: Pod "host-process-command-1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130909134s Feb 2 21:39:22.424: INFO: Pod "host-process-command-1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.17559221s Feb 2 21:39:24.470: INFO: Pod "host-process-command-1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.221367245s Feb 2 21:39:26.514: INFO: Pod "host-process-command-1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.265328672s Feb 2 21:39:28.645: INFO: Pod "host-process-command-1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.396424687s Feb 2 21:39:30.690: INFO: Pod "host-process-command-1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.441081463s Feb 2 21:39:32.745: INFO: Pod "host-process-command-1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.4966485s Feb 2 21:39:34.790: INFO: Pod "host-process-command-1": Phase="Pending", Reason="", readiness=false. Elapsed: 16.541176497s Feb 2 21:39:36.834: INFO: Pod "host-process-command-1": Phase="Pending", Reason="", readiness=false. Elapsed: 18.58480945s Feb 2 21:39:38.877: INFO: Pod "host-process-command-1": Phase="Pending", Reason="", readiness=false. Elapsed: 20.62834147s Feb 2 21:39:40.922: INFO: Pod "host-process-command-1": Phase="Pending", Reason="", readiness=false. Elapsed: 22.673465205s Feb 2 21:39:42.967: INFO: Pod "host-process-command-1": Phase="Pending", Reason="", readiness=false. Elapsed: 24.718494843s Feb 2 21:39:45.012: INFO: Pod "host-process-command-1": Phase="Pending", Reason="", readiness=false. Elapsed: 26.76299408s Feb 2 21:39:47.056: INFO: Pod "host-process-command-1": Phase="Pending", Reason="", readiness=false. Elapsed: 28.807055357s Feb 2 21:39:49.100: INFO: Pod "host-process-command-1": Phase="Pending", Reason="", readiness=false. Elapsed: 30.851140596s Feb 2 21:39:51.145: INFO: Pod "host-process-command-1": Phase="Pending", Reason="", readiness=false. Elapsed: 32.896594521s Feb 2 21:39:53.190: INFO: Pod "host-process-command-1": Phase="Pending", Reason="", readiness=false. Elapsed: 34.941483271s Feb 2 21:39:55.232: INFO: Pod "host-process-command-1": Phase="Pending", Reason="", readiness=false. Elapsed: 36.98351627s Feb 2 21:39:57.277: INFO: Pod "host-process-command-1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 39.027712135s
Feb 2 21:39:59.321: INFO: Pod "host-process-command-1": Phase="Pending", Reason="", readiness=false. Elapsed: 41.071911306s
[... the same Pending status line repeated for every ~2s poll, from Feb 2 21:40:01.366 through Feb 2 21:42:16.312 (elapsed 2m58.063563905s) ...]
Feb 2 21:42:18.314: FAIL: wait for pod "host-process-command-1" to finish running
Expected success, but got an error:
    <*errors.errorString | 0xc0047efac0>: {
        s: "Gave up after waiting 3m0s for pod \"host-process-command-1\" to be \"Succeeded or Failed\"",
    }
    Gave up after waiting 3m0s for pod "host-process-command-1" to be "Succeeded or Failed"

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*PodClient).WaitForFinish(0xc00288d800?, {0xc005d5f6e0, 0x16}, 0x0?)
	test/e2e/framework/pods.go:257 +0x1ae
k8s.io/kubernetes/test/e2e/windows.glob..func7.4()
	test/e2e/windows/host_process.go:392 +0x16d5
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
	test/e2e/e2e.go:130 +0x6bb
k8s.io/kubernetes/test/e2e.TestE2E(0x24e5201?)
	test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000d124e0, 0x741f9a8)
	/usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1493 +0x35f

[AfterEach] [sig-windows] [Feature:WindowsHostProcessContainers] [MinimumKubeletVersion:1.22] HostProcess containers
  test/e2e/framework/framework.go:188
STEP: Collecting events from namespace "host-process-test-windows-4310".
STEP: Found 19 events.
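For context on the timeout above: the framework gives the pod 3m0s to reach a terminal phase, polling roughly every 2s. Below is a minimal client-go sketch of that kind of wait loop. It is illustrative only, not the e2e framework's actual WaitForFinish implementation; the package name, function name, and the assumption of an already-constructed kubernetes.Interface are all invented for the example.

// Sketch of a pod-completion wait of the kind that timed out above.
// NOT the e2e framework's WaitForFinish; names and setup are assumed.
package podwait

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForPodFinish polls every 2s until the pod is Succeeded or Failed,
// giving up after the supplied timeout (the run above used 3m0s).
func WaitForPodFinish(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	start := time.Now()
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pod.Status.Phase == v1.PodSucceeded || pod.Status.Phase == v1.PodFailed {
			return true, nil // terminal phase reached
		}
		// Mirrors the shape of the poll lines above: phase plus elapsed time.
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %s\n", name, pod.Status.Phase, time.Since(start))
		return false, nil // still Pending/Running; keep polling
	})
}

In this run the pod never left Pending: the events listed next show container creation on the Windows node progressing (Created at 21:41:09 and 21:42:05 for a pod scheduled at 21:39:18), but far too slowly to beat the 3-minute budget.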
Feb 2 21:42:18.360: INFO: At 2023-02-02 21:39:01 +0000 UTC - event for host-process-command-0: {default-scheduler } Scheduled: Successfully assigned host-process-test-windows-4310/host-process-command-0 to e2e-7d89e54d79-37bac-windows-node-group-jllf
Feb 2 21:42:18.360: INFO: At 2023-02-02 21:39:02 +0000 UTC - event for host-process-command-0: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine
Feb 2 21:42:18.360: INFO: At 2023-02-02 21:39:02 +0000 UTC - event for host-process-command-0: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Created: Created container host-process-command-0-0
Feb 2 21:42:18.360: INFO: At 2023-02-02 21:39:03 +0000 UTC - event for host-process-command-0: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Started: Started container host-process-command-0-0
Feb 2 21:42:18.360: INFO: At 2023-02-02 21:39:03 +0000 UTC - event for host-process-command-0: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine
Feb 2 21:42:18.360: INFO: At 2023-02-02 21:39:03 +0000 UTC - event for host-process-command-0: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Created: Created container host-process-command-0-1
Feb 2 21:42:18.360: INFO: At 2023-02-02 21:39:03 +0000 UTC - event for host-process-command-0: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Started: Started container host-process-command-0-1
Feb 2 21:42:18.360: INFO: At 2023-02-02 21:39:03 +0000 UTC - event for host-process-command-0: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine
Feb 2 21:42:18.360: INFO: At 2023-02-02 21:39:03 +0000 UTC - event for host-process-command-0: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Created: Created container host-process-command-0-2
Feb 2 21:42:18.360: INFO: At 2023-02-02 21:39:04 +0000 UTC - event for host-process-command-0: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Started: Started container host-process-command-0-2
Feb 2 21:42:18.360: INFO: At 2023-02-02 21:39:04 +0000 UTC - event for host-process-command-0: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine
Feb 2 21:42:18.360: INFO: At 2023-02-02 21:39:04 +0000 UTC - event for host-process-command-0: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Created: Created container host-process-command-0-3
Feb 2 21:42:18.360: INFO: At 2023-02-02 21:39:04 +0000 UTC - event for host-process-command-0: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Started: Started container host-process-command-0-3
Feb 2 21:42:18.360: INFO: At 2023-02-02 21:39:18 +0000 UTC - event for host-process-command-1: {default-scheduler } Scheduled: Successfully assigned host-process-test-windows-4310/host-process-command-1 to e2e-7d89e54d79-37bac-windows-node-group-jllf
Feb 2 21:42:18.360: INFO: At 2023-02-02 21:39:57 +0000 UTC - event for host-process-command-1: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine
Feb 2 21:42:18.360: INFO: At 2023-02-02 21:41:09 +0000 UTC - event for host-process-command-1: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Created: Created container host-process-command-1-0
Feb 2 21:42:18.360: INFO: At 2023-02-02 21:41:18 +0000 UTC - event for host-process-command-1: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Started: Started container host-process-command-1-0
Feb 2 21:42:18.360: INFO: At 2023-02-02 21:41:18 +0000 UTC - event for host-process-command-1: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine
Feb 2 21:42:18.360: INFO: At 2023-02-02 21:42:05 +0000 UTC - event for host-process-command-1: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Created: Created container host-process-command-1-1
Feb 2 21:42:18.405: INFO: POD NODE PHASE GRACE CONDITIONS
Feb 2 21:42:18.405: INFO: host-process-command-0 e2e-7d89e54d79-37bac-windows-node-group-jllf Succeeded [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 21:39:01 +0000 UTC PodCompleted } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 21:39:08 +0000 UTC PodCompleted } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 21:39:08 +0000 UTC PodCompleted } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 21:39:01 +0000 UTC }]
Feb 2 21:42:18.405: INFO: host-process-command-1 e2e-7d89e54d79-37bac-windows-node-group-jllf Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 21:39:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 21:39:18 +0000 UTC ContainersNotReady containers with unready status: [host-process-command-1-0 host-process-command-1-1 host-process-command-1-2 host-process-command-1-3]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 21:39:18 +0000 UTC ContainersNotReady containers with unready status: [host-process-command-1-0 host-process-command-1-1 host-process-command-1-2 host-process-command-1-3]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 21:39:18 +0000 UTC }]
Feb 2 21:42:18.405: INFO:
Feb 2 21:42:18.695: INFO: Unable to fetch host-process-test-windows-4310/host-process-command-1/host-process-command-1-0 logs: the server rejected our request for an unknown reason (get pods host-process-command-1)
Feb 2 21:42:18.743: INFO: Unable to fetch host-process-test-windows-4310/host-process-command-1/host-process-command-1-1 logs: the server rejected our request for an unknown reason (get pods host-process-command-1)
Feb 2 21:42:18.790: INFO: Unable to fetch host-process-test-windows-4310/host-process-command-1/host-process-command-1-2 logs: the server rejected our request for an unknown reason (get pods host-process-command-1)
Feb 2 21:42:18.847: INFO: Unable to fetch host-process-test-windows-4310/host-process-command-1/host-process-command-1-3 logs: the server rejected our request for an unknown reason (get pods host-process-command-1)
Feb 2 21:42:18.898: INFO: Logging node info for node e2e-7d89e54d79-37bac-master
Feb 2 21:42:18.941: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-master b403d958-aef5-4e5e-9b07-9812dc3e7d8b 4952 0 2023-02-02 21:23:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:23:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-02-02 21:23:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-02-02 21:23:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kubelet Update v1 2023-02-02 21:23:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3864313856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3602169856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:23:35 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:39:15 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:39:15 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:39:15 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:39:15 +0000 UTC,LastTransitionTime:2023-02-02 21:23:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.2,},NodeAddress{Type:ExternalIP,Address:35.247.98.204,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-master.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-master.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e8df098cf83a91bc3c7c2a97ba5a41e9,SystemUUID:e8df098c-f83a-91bc-3c7c-2a97ba5a41e9,BootID:59df2086-4103-4b38-9939-c916841efb98,KernelVersion:5.4.129+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:131733971,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:121342787,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:52751170,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:be60ef505fc80879eeb5d8bf3ad8bb1146b395afc2394584645e99431806c26c gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.12.0],SizeBytes:32705362,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:d863f7fd0da4392b9753dc6c9195a658e80d70e0be8c9adb410d77cf20b75c76 registry.k8s.io/kas-network-proxy/proxy-server:v0.0.35],SizeBytes:21985251,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 21:42:18.942: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-master Feb 2 21:42:18.985: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-master Feb 2 21:42:19.056: INFO: konnectivity-server-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:42:19.056: INFO: Container konnectivity-server-container ready: true, restart count 0 Feb 2 21:42:19.056: INFO: kube-apiserver-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:42:19.056: INFO: Container kube-apiserver ready: true, restart count 0 Feb 2 21:42:19.056: INFO: 
kube-scheduler-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:42:19.056: INFO: Container kube-scheduler ready: true, restart count 0 Feb 2 21:42:19.056: INFO: etcd-server-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:42:19.056: INFO: Container etcd-container ready: true, restart count 0 Feb 2 21:42:19.056: INFO: l7-lb-controller-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:58 +0000 UTC (0+1 container statuses recorded) Feb 2 21:42:19.056: INFO: Container l7-lb-controller ready: true, restart count 3 Feb 2 21:42:19.056: INFO: metadata-proxy-v0.1-fmxnz started at 2023-02-02 21:23:46 +0000 UTC (0+2 container statuses recorded) Feb 2 21:42:19.056: INFO: Container metadata-proxy ready: true, restart count 0 Feb 2 21:42:19.056: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Feb 2 21:42:19.056: INFO: kube-controller-manager-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:42:19.056: INFO: Container kube-controller-manager ready: true, restart count 2 Feb 2 21:42:19.056: INFO: etcd-server-events-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:42:19.056: INFO: Container etcd-container ready: true, restart count 0 Feb 2 21:42:19.056: INFO: kube-addon-manager-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:58 +0000 UTC (0+1 container statuses recorded) Feb 2 21:42:19.056: INFO: Container kube-addon-manager ready: true, restart count 0 Feb 2 21:42:19.279: INFO: Latency metrics for node e2e-7d89e54d79-37bac-master Feb 2 21:42:19.279: INFO: Logging node info for node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 21:42:19.323: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-minion-group-1vp1 d81fe224-05dd-48a7-9693-e2f2826a1b97 5835 0 2023-02-02 21:23:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-minion-group-1vp1 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-4 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-02-02 21:23:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-02-02 21:23:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.5.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:23:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-02-02 21:23:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-02-02 21:24:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.5.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-minion-group-1vp1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.5.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15735660544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15473516544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-02-02 21:38:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-02-02 21:38:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-02-02 21:38:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-02-02 21:38:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is 
functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-02-02 21:38:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-02-02 21:38:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-02-02 21:38:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:23:38 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:41:29 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:41:29 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:41:29 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:41:29 +0000 UTC,LastTransitionTime:2023-02-02 21:23:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.7,},NodeAddress{Type:ExternalIP,Address:35.197.102.154,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-minion-group-1vp1.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-minion-group-1vp1.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ade76a2ad94b4b90c3b7ba811704d98c,SystemUUID:ade76a2a-d94b-4b90-c3b7-ba811704d98c,BootID:29452487-f38a-42cd-8605-aecb73730dd9,KernelVersion:5.4.129+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:112212023,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 
k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4 k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0],SizeBytes:18952261,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[k8s.gcr.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 k8s.gcr.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:8970dca5c4c9df1d566c3c3c91ef2e743e410a8623d42062eb48e7245f1eef93 registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.35],SizeBytes:8488019,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 k8s.gcr.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 21:42:19.323: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 21:42:19.367: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 21:42:19.441: INFO: konnectivity-agent-mn5mq started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:42:19.441: INFO: Container konnectivity-agent ready: true, restart count 0 Feb 2 21:42:19.441: INFO: volume-snapshot-controller-0 started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:42:19.441: INFO: Container volume-snapshot-controller ready: true, restart count 0 Feb 2 21:42:19.441: INFO: kube-dns-autoscaler-596f6cf79f-v76jk started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:42:19.441: INFO: Container autoscaler ready: true, restart count 0 Feb 2 21:42:19.441: INFO: metadata-proxy-v0.1-kmxp5 started at 2023-02-02 21:23:39 +0000 UTC (0+2 container statuses recorded) Feb 2 21:42:19.441: INFO: Container metadata-proxy ready: true, restart count 0 Feb 2 21:42:19.441: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Feb 2 21:42:19.441: INFO: coredns-8c79ffd8b-rd5tr started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:42:19.441: INFO: Container coredns ready: true, restart count 0 Feb 2 21:42:19.441: INFO: l7-default-backend-8667cd4ffc-pgmnb started at 2023-02-02 
21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:42:19.441: INFO: Container default-http-backend ready: true, restart count 0 Feb 2 21:42:19.441: INFO: kube-proxy-e2e-7d89e54d79-37bac-minion-group-1vp1 started at 2023-02-02 21:23:38 +0000 UTC (0+1 container statuses recorded) Feb 2 21:42:19.441: INFO: Container kube-proxy ready: true, restart count 0 Feb 2 21:42:19.626: INFO: Latency metrics for node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 21:42:19.626: INFO: Logging node info for node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 21:42:19.669: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-minion-group-fhnf 9c0dcb7a-8a6b-4535-afe8-b62bf19173f7 5838 0 2023-02-02 21:23:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-minion-group-fhnf kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-4 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:23:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.4.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:23:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-02-02 21:23:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-02-02 21:23:44 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-02-02 21:24:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.4.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-minion-group-fhnf,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.4.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15735660544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15473516544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-02-02 21:38:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-02-02 21:38:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-02-02 21:38:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-02-02 21:38:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-02-02 21:38:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker 
overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-02-02 21:38:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-02-02 21:38:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:23:39 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:41:31 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:41:31 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:41:31 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:41:31 +0000 UTC,LastTransitionTime:2023-02-02 21:23:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.6,},NodeAddress{Type:ExternalIP,Address:34.127.30.111,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-minion-group-fhnf.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-minion-group-fhnf.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8abcabc408b3bd715147992f3d5a5854,SystemUUID:8abcabc4-08b3-bd71-5147-992f3d5a5854,BootID:8d0324c2-172b-4c43-81ee-83b6878e11ee,KernelVersion:5.4.129+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:112212023,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e 
k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[k8s.gcr.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 k8s.gcr.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:8970dca5c4c9df1d566c3c3c91ef2e743e410a8623d42062eb48e7245f1eef93 registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.35],SizeBytes:8488019,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 21:42:19.670: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 21:42:19.715: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 21:42:19.774: INFO: metadata-proxy-v0.1-xl4fd started at 2023-02-02 21:23:40 +0000 UTC (0+2 container statuses recorded) Feb 2 21:42:19.774: INFO: Container metadata-proxy ready: true, restart count 0 Feb 2 21:42:19.774: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Feb 2 21:42:19.774: INFO: konnectivity-agent-k667p started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:42:19.774: INFO: Container konnectivity-agent ready: true, restart count 0 Feb 2 21:42:19.774: INFO: kube-proxy-e2e-7d89e54d79-37bac-minion-group-fhnf started at 2023-02-02 21:23:39 +0000 UTC (0+1 container statuses recorded) Feb 2 21:42:19.774: INFO: Container kube-proxy ready: true, restart count 0 Feb 2 21:42:19.774: INFO: metrics-server-v0.5.2-6d6794c8cd-9vklc started at 2023-02-02 21:24:01 +0000 UTC (0+2 container statuses recorded) Feb 2 21:42:19.774: INFO: Container metrics-server ready: true, restart count 0 Feb 2 21:42:19.774: INFO: Container metrics-server-nanny ready: true, restart count 0 Feb 2 21:42:19.774: INFO: coredns-8c79ffd8b-4v5p9 started at 2023-02-02 21:23:54 +0000 UTC (0+1 container statuses recorded) Feb 2 21:42:19.774: INFO: Container coredns ready: true, restart count 0 Feb 2 21:42:19.942: INFO: Latency metrics for node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 21:42:19.943: INFO: Logging node info for node e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 21:42:19.986: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-windows-node-group-jllf 1f67dcc8-9253-4e93-8b90-78810a8df879 4851 0 2023-02-02 21:29:09 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-windows-node-group-jllf kubernetes.io/os:windows node.kubernetes.io/instance-type:n1-standard-4 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:29:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:29:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet.exe Update v1 2023-02-02 21:29:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet.exe Update v1 2023-02-02 21:30:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-windows-node-group-jllf,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{107252527104 0} {<nil>} 104738796Ki BinarySI},memory: {{16102309888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{96527274234 0} {<nil>} 96527274234 DecimalSI},memory: {{15840165888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:29:09 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:39:13 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:39:13 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:39:13 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:39:13 +0000 UTC,LastTransitionTime:2023-02-02 21:29:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.4,},NodeAddress{Type:ExternalIP,Address:34.83.75.252,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-windows-node-group-jllf.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-windows-node-group-jllf,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e2e-7d89e54d79-37bac-windows-node-group-jllf,SystemUUID:23C88569-8B16-0615-6BFF-BB819EADA98A,BootID:9,KernelVersion:10.0.17763.2183,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:204397145,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:203202672,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:104484632,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 21:42:19.986: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 21:42:20.030: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 21:42:20.078: INFO: host-process-command-1 started at 2023-02-02 21:39:18 +0000 UTC (0+4 container statuses recorded) Feb 2 21:42:20.078: INFO: Container host-process-command-1-0 ready: false, restart count 0 Feb 2 21:42:20.078: INFO: Container host-process-command-1-1 ready: false, restart count 0 Feb 2 21:42:20.078: INFO: Container host-process-command-1-2 ready: false, restart count 0 Feb 2 21:42:20.078: INFO: Container host-process-command-1-3 ready: false, restart count 0 Feb 2 21:42:20.078: INFO: host-process-command-0 started at 2023-02-02 21:39:01 +0000 UTC (0+4 container statuses recorded) Feb 2 21:42:20.078: INFO: Container host-process-command-0-0 ready: false, restart count 0 Feb 2 21:42:20.078: INFO: Container host-process-command-0-1 ready: false, restart count 0 Feb 2 21:42:20.078: INFO: Container host-process-command-0-2 ready: false, restart count 0 Feb 2 21:42:20.078: INFO: Container host-process-command-0-3 ready: false, restart count 0 Feb 2 21:42:20.078: INFO: 
externalsvc-zthbm started at 2023-02-02 21:39:22 +0000 UTC (0+1 container statuses recorded) Feb 2 21:42:20.078: INFO: Container externalsvc ready: false, restart count 0 Feb 2 21:43:23.124: INFO: Latency metrics for node e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 21:43:23.124: INFO: Logging node info for node e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 21:43:23.175: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-windows-node-group-k0qm 91cb59e3-df60-4007-bdc1-bb197e591e43 4252 0 2023-02-02 21:29:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-windows-node-group-k0qm kubernetes.io/os:windows node.kubernetes.io/instance-type:n1-standard-4 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet.exe Update v1 2023-02-02 21:29:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-02-02 21:29:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:29:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet.exe Update v1 2023-02-02 21:30:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-windows-node-group-k0qm,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{107252527104 0} {<nil>} 104738796Ki BinarySI},memory: {{16102309888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{96527274234 0} {<nil>} 96527274234 DecimalSI},memory: {{15840165888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 
21:29:24 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:38:25 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:38:25 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:38:25 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:38:25 +0000 UTC,LastTransitionTime:2023-02-02 21:29:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.1.208,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-windows-node-group-k0qm.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-windows-node-group-k0qm,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e2e-7d89e54d79-37bac-windows-node-group-k0qm,SystemUUID:FC53E984-3141-4AB0-99D2-83726BB3072F,BootID:9,KernelVersion:10.0.17763.2183,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:204397145,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:203202672,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:179603451,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:104484632,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 21:43:23.175: INFO: Logging 
kubelet events for node e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 21:43:23.219: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 21:43:23.286: INFO: sample-crd-conversion-webhook-deployment-656754656d-859dh started at 2023-02-02 21:39:34 +0000 UTC (0+1 container statuses recorded) Feb 2 21:43:23.286: INFO: Container sample-crd-conversion-webhook ready: false, restart count 0 Feb 2 21:43:23.287: INFO: externalsvc-pnwkb started at 2023-02-02 21:39:22 +0000 UTC (0+1 container statuses recorded) Feb 2 21:43:23.287: INFO: Container externalsvc ready: false, restart count 0 Feb 2 21:43:53.223: INFO: Latency metrics for node e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 21:43:53.223: INFO: Logging node info for node e2e-7d89e54d79-37bac-windows-node-group-q21f Feb 2 21:43:53.267: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-windows-node-group-q21f eef3ae47-aa0d-4af8-87e8-4c4de04eace2 4481 0 2023-02-02 21:29:14 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-windows-node-group-q21f kubernetes.io/os:windows node.kubernetes.io/instance-type:n1-standard-4 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:29:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:29:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet.exe Update v1 2023-02-02 21:29:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet.exe Update v1 2023-02-02 21:30:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-windows-node-group-q21f,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{107252527104 0} {<nil>} 104738796Ki 
BinarySI},memory: {{16102309888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{96527274234 0} {<nil>} 96527274234 DecimalSI},memory: {{15840165888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:29:14 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:38:47 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:38:47 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:38:47 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:38:47 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.230.207,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-windows-node-group-q21f.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-windows-node-group-q21f,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e2e-7d89e54d79-37bac-windows-node-group-q21f,SystemUUID:B1BBE679-4138-5169-4472-E3B13289F193,BootID:9,KernelVersion:10.0.17763.2183,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:204397145,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:203202672,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:179603451,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 
k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:104484632,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:102745583,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 21:43:53.267: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-windows-node-group-q21f Feb 2 21:43:53.310: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-windows-node-group-q21f Feb 2 21:43:53.377: INFO: pod-configmaps-a44ef9c3-f9e2-4587-b5c3-d3c378922d47 started at 2023-02-02 21:39:14 +0000 UTC (0+1 container statuses recorded) Feb 2 21:43:53.378: INFO: Container agnhost-container ready: true, restart count 0 Feb 2 21:43:53.608: INFO: Latency metrics for node e2e-7d89e54d79-37bac-windows-node-group-q21f Feb 2 21:43:53.608: INFO: Waiting up to 3m0s for all (but 3) nodes to be ready STEP: Destroying namespace "host-process-test-windows-4310" for this suite.
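The "Node Info" dumps above are the e2e framework's post-failure diagnostics, flattened into single lines. As a reading aid only, here is a minimal client-go sketch, not the framework's own code, that prints the same NodeCondition fields for every node in the cluster; the kubeconfig path is the one this run logs (/workspace/.kube/config), so substitute your own.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as logged by this run; substitute your own.
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Println("node:", n.Name)
		// Same NodeCondition fields that appear in the dumps above.
		for _, c := range n.Status.Conditions {
			fmt.Printf("  %s=%s reason=%s heartbeat=%s\n", c.Type, c.Status, c.Reason, c.LastHeartbeatTime)
		}
	}
}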
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-windows\]\s\[Feature\:WindowsHostProcessContainers\]\s\[MinimumKubeletVersion\:1\.22\]\sHostProcess\scontainers\sshould\srun\sas\sa\sprocess\son\sthe\shost\/node$'
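Unescaped, the --ginkgo.focus pattern above selects the single test "Kubernetes e2e suite [sig-windows] [Feature:WindowsHostProcessContainers] [MinimumKubeletVersion:1.22] HostProcess containers should run as a process on the host/node".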
test/e2e/windows/host_process.go:88 Feb 2 21:51:16.827: Expected <v1.PodPhase>: Failed to equal <v1.PodPhase>: Succeeded test/e2e/windows/host_process.go:134 from junit_01.xml
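The assertion at test/e2e/windows/host_process.go:134 requires the pod's terminal phase to be Succeeded; here the pod ended as Failed, so the test failed. Below is a rough client-go reconstruction of the flow shown in the log that follows, not the framework's actual code: the namespace, node, pod name, container name, and image are taken from the log, while the HostProcess security context and the cmd.exe %COMPUTERNAME% check are assumptions based on the test's stated intent.

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config") // path from this run's logs
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	node := "e2e-7d89e54d79-37bac-windows-node-group-jllf" // the Windows node the test selected
	ns := "host-process-test-windows-7093"                 // namespace from the log
	hostProcess := true
	user := `NT AUTHORITY\SYSTEM` // assumed; HostProcess pods need a host user identity

	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "host-process-test-pod"},
		Spec: v1.PodSpec{
			// Assumed configuration: HostProcess pods run with host networking
			// and a Windows host-process security context.
			HostNetwork:   true,
			RestartPolicy: v1.RestartPolicyNever,
			NodeName:      node,
			SecurityContext: &v1.PodSecurityContext{
				WindowsOptions: &v1.WindowsSecurityContextOptions{
					HostProcess:   &hostProcess,
					RunAsUserName: &user,
				},
			},
			Containers: []v1.Container{{
				Name:  "computer-name-test",
				Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-2",
				// Hypothetical check: exit 0 only if %COMPUTERNAME% equals the node name.
				Command: []string{"cmd.exe", "/c",
					fmt.Sprintf(`if /i "%%COMPUTERNAME%%"=="%s" (exit 0) else (exit 1)`, node)},
			}},
		},
	}

	if _, err := client.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Poll until the pod reaches a terminal phase, then assert Succeeded,
	// mirroring the "Succeeded or Failed" wait plus the final phase check.
	for {
		p, err := client.CoreV1().Pods(ns).Get(context.TODO(), "host-process-test-pod", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if p.Status.Phase == v1.PodSucceeded || p.Status.Phase == v1.PodFailed {
			if p.Status.Phase != v1.PodSucceeded {
				panic(fmt.Sprintf("pod phase = %s, want %s", p.Status.Phase, v1.PodSucceeded))
			}
			fmt.Println("pod succeeded")
			return
		}
		time.Sleep(2 * time.Second)
	}
}

With restartPolicy Never, a container that exits non-zero drives the pod to a terminal Failed phase, which matches the Phase="Failed" transition logged at 21:51:16.784 below.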
[BeforeEach] [sig-windows] [Feature:WindowsHostProcessContainers] [MinimumKubeletVersion:1.22] HostProcess containers test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] [Feature:WindowsHostProcessContainers] [MinimumKubeletVersion:1.22] HostProcess containers test/e2e/windows/host_process.go:81 [BeforeEach] [sig-windows] [Feature:WindowsHostProcessContainers] [MinimumKubeletVersion:1.22] HostProcess containers test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Feb 2 21:51:08.168: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename host-process-test-windows STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should run as a process on the host/node test/e2e/windows/host_process.go:88 STEP: selecting a Windows node Feb 2 21:51:08.522: INFO: Using node: e2e-7d89e54d79-37bac-windows-node-group-jllf STEP: scheduling a pod with a container that verifies %COMPUTERNAME% matches selected node name STEP: Waiting for pod to run Feb 2 21:51:08.569: INFO: Waiting up to 3m0s for pod "host-process-test-pod" in namespace "host-process-test-windows-7093" to be "Succeeded or Failed" Feb 2 21:51:08.614: INFO: Pod "host-process-test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 45.422406ms Feb 2 21:51:10.656: INFO: Pod "host-process-test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.087450782s Feb 2 21:51:12.699: INFO: Pod "host-process-test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.129864913s Feb 2 21:51:14.741: INFO: Pod "host-process-test-pod": Phase="Running", Reason="", readiness=false. Elapsed: 6.171982402s Feb 2 21:51:16.784: INFO: Pod "host-process-test-pod": Phase="Failed", Reason="", readiness=false. Elapsed: 8.215522275s Feb 2 21:51:16.784: INFO: Pod "host-process-test-pod" satisfied condition "Succeeded or Failed" STEP: Then ensuring pod finished running successfully Feb 2 21:51:16.827: FAIL: Expected <v1.PodPhase>: Failed to equal <v1.PodPhase>: Succeeded Full Stack Trace k8s.io/kubernetes/test/e2e/windows.glob..func7.2() test/e2e/windows/host_process.go:134 +0x766 k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?) test/e2e/e2e.go:130 +0x6bb k8s.io/kubernetes/test/e2e.TestE2E(0x24e5201?) test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000d124e0, 0x741f9a8) /usr/local/go/src/testing/testing.go:1446 +0x10b created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1493 +0x35f [AfterEach] [sig-windows] [Feature:WindowsHostProcessContainers] [MinimumKubeletVersion:1.22] HostProcess containers test/e2e/framework/framework.go:188 STEP: Collecting events from namespace "host-process-test-windows-7093". STEP: Found 3 events. 
Feb 2 21:51:16.869: INFO: At 2023-02-02 21:51:08 +0000 UTC - event for host-process-test-pod: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Feb 2 21:51:16.870: INFO: At 2023-02-02 21:51:08 +0000 UTC - event for host-process-test-pod: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Created: Created container computer-name-test Feb 2 21:51:16.870: INFO: At 2023-02-02 21:51:09 +0000 UTC - event for host-process-test-pod: {kubelet e2e-7d89e54d79-37bac-windows-node-group-jllf} Started: Started container computer-name-test Feb 2 21:51:16.912: INFO: POD NODE PHASE GRACE CONDITIONS Feb 2 21:51:16.912: INFO: host-process-test-pod e2e-7d89e54d79-37bac-windows-node-group-jllf Failed [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 21:51:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 21:51:14 +0000 UTC PodFailed } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 21:51:14 +0000 UTC PodFailed } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 21:51:08 +0000 UTC }] Feb 2 21:51:16.912: INFO: Feb 2 21:51:17.018: INFO: Logging node info for node e2e-7d89e54d79-37bac-master Feb 2 21:51:17.061: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-master b403d958-aef5-4e5e-9b07-9812dc3e7d8b 8336 0 2023-02-02 21:23:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:23:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-02-02 21:23:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-02-02 21:23:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kubelet Update v1 2023-02-02 21:23:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3864313856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3602169856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:23:35 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:49:28 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:49:28 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:49:28 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:49:28 +0000 UTC,LastTransitionTime:2023-02-02 21:23:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.2,},NodeAddress{Type:ExternalIP,Address:35.247.98.204,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-master.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-master.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e8df098cf83a91bc3c7c2a97ba5a41e9,SystemUUID:e8df098c-f83a-91bc-3c7c-2a97ba5a41e9,BootID:59df2086-4103-4b38-9939-c916841efb98,KernelVersion:5.4.129+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:131733971,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:121342787,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:52751170,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:be60ef505fc80879eeb5d8bf3ad8bb1146b395afc2394584645e99431806c26c gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.12.0],SizeBytes:32705362,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:d863f7fd0da4392b9753dc6c9195a658e80d70e0be8c9adb410d77cf20b75c76 registry.k8s.io/kas-network-proxy/proxy-server:v0.0.35],SizeBytes:21985251,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 21:51:17.062: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-master Feb 2 21:51:17.106: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-master Feb 2 21:51:17.180: INFO: kube-addon-manager-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:58 +0000 UTC (0+1 container statuses recorded) Feb 2 21:51:17.180: INFO: Container kube-addon-manager ready: true, restart count 0 Feb 2 21:51:17.180: INFO: kube-controller-manager-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:51:17.180: INFO: Container kube-controller-manager ready: true, restart count 2 Feb 2 21:51:17.180: INFO: etcd-server-events-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:51:17.180: INFO: Container etcd-container ready: true, restart count 0 Feb 2 21:51:17.180: 
INFO: kube-scheduler-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:51:17.180: INFO: Container kube-scheduler ready: true, restart count 0 Feb 2 21:51:17.180: INFO: etcd-server-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:51:17.180: INFO: Container etcd-container ready: true, restart count 0 Feb 2 21:51:17.180: INFO: l7-lb-controller-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:58 +0000 UTC (0+1 container statuses recorded) Feb 2 21:51:17.180: INFO: Container l7-lb-controller ready: true, restart count 3 Feb 2 21:51:17.180: INFO: metadata-proxy-v0.1-fmxnz started at 2023-02-02 21:23:46 +0000 UTC (0+2 container statuses recorded) Feb 2 21:51:17.180: INFO: Container metadata-proxy ready: true, restart count 0 Feb 2 21:51:17.180: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Feb 2 21:51:17.180: INFO: konnectivity-server-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:51:17.180: INFO: Container konnectivity-server-container ready: true, restart count 0 Feb 2 21:51:17.180: INFO: kube-apiserver-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:51:17.180: INFO: Container kube-apiserver ready: true, restart count 0 Feb 2 21:51:17.416: INFO: Latency metrics for node e2e-7d89e54d79-37bac-master Feb 2 21:51:17.416: INFO: Logging node info for node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 21:51:17.459: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-minion-group-1vp1 d81fe224-05dd-48a7-9693-e2f2826a1b97 8096 0 2023-02-02 21:23:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-minion-group-1vp1 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-4 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-02-02 21:23:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-02-02 21:23:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.5.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:23:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-02-02 21:23:43 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-02-02 21:24:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.5.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-minion-group-1vp1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.5.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15735660544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15473516544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-02-02 21:48:46 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-02-02 21:48:46 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-02-02 21:48:46 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-02-02 21:48:46 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-02-02 21:48:46 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not 
read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-02-02 21:48:46 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-02-02 21:48:46 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:23:38 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:46:36 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:46:36 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:46:36 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:46:36 +0000 UTC,LastTransitionTime:2023-02-02 21:23:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.7,},NodeAddress{Type:ExternalIP,Address:35.197.102.154,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-minion-group-1vp1.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-minion-group-1vp1.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ade76a2ad94b4b90c3b7ba811704d98c,SystemUUID:ade76a2a-d94b-4b90-c3b7-ba811704d98c,BootID:29452487-f38a-42cd-8605-aecb73730dd9,KernelVersion:5.4.129+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:112212023,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4 
k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0],SizeBytes:18952261,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[k8s.gcr.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 k8s.gcr.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:8970dca5c4c9df1d566c3c3c91ef2e743e410a8623d42062eb48e7245f1eef93 registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.35],SizeBytes:8488019,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 k8s.gcr.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 21:51:17.460: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 21:51:17.504: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 21:51:17.571: INFO: metadata-proxy-v0.1-kmxp5 started at 2023-02-02 21:23:39 +0000 UTC (0+2 container statuses recorded) Feb 2 21:51:17.571: INFO: Container metadata-proxy ready: true, restart count 0 Feb 2 21:51:17.571: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Feb 2 21:51:17.571: INFO: kube-dns-autoscaler-596f6cf79f-v76jk started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:51:17.571: INFO: Container autoscaler ready: true, restart count 0 Feb 2 21:51:17.571: INFO: kube-proxy-e2e-7d89e54d79-37bac-minion-group-1vp1 started at 2023-02-02 21:23:38 +0000 UTC (0+1 container statuses recorded) Feb 2 21:51:17.571: INFO: Container kube-proxy ready: true, restart count 0 Feb 2 21:51:17.571: INFO: coredns-8c79ffd8b-rd5tr started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:51:17.571: INFO: Container coredns ready: true, restart count 0 Feb 2 21:51:17.571: INFO: l7-default-backend-8667cd4ffc-pgmnb started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:51:17.571: INFO: Container default-http-backend ready: true, restart count 0 Feb 2 21:51:17.571: INFO: volume-snapshot-controller-0 started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:51:17.571: INFO: Container volume-snapshot-controller ready: true, restart count 0 Feb 2 21:51:17.571: INFO: konnectivity-agent-mn5mq 
started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:51:17.571: INFO: Container konnectivity-agent ready: true, restart count 0 Feb 2 21:51:17.754: INFO: Latency metrics for node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 21:51:17.754: INFO: Logging node info for node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 21:51:17.797: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-minion-group-fhnf 9c0dcb7a-8a6b-4535-afe8-b62bf19173f7 8095 0 2023-02-02 21:23:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-minion-group-fhnf kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-4 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:23:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.4.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:23:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-02-02 21:23:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-02-02 21:23:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-02-02 21:24:09 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.4.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-minion-group-fhnf,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.4.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15735660544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15473516544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-02-02 21:48:46 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-02-02 21:48:46 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-02-02 21:48:46 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-02-02 21:48:46 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-02-02 21:48:46 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-02-02 21:48:46 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-02-02 21:48:46 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:23:39 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:46:38 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:46:38 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:46:38 
+0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:46:38 +0000 UTC,LastTransitionTime:2023-02-02 21:23:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.6,},NodeAddress{Type:ExternalIP,Address:34.127.30.111,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-minion-group-fhnf.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-minion-group-fhnf.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8abcabc408b3bd715147992f3d5a5854,SystemUUID:8abcabc4-08b3-bd71-5147-992f3d5a5854,BootID:8d0324c2-172b-4c43-81ee-83b6878e11ee,KernelVersion:5.4.129+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:112212023,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[k8s.gcr.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 k8s.gcr.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:8970dca5c4c9df1d566c3c3c91ef2e743e410a8623d42062eb48e7245f1eef93 registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.35],SizeBytes:8488019,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 21:51:17.797: INFO: Logging kubelet events for node 
e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 21:51:17.875: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 21:51:17.937: INFO: kube-proxy-e2e-7d89e54d79-37bac-minion-group-fhnf started at 2023-02-02 21:23:39 +0000 UTC (0+1 container statuses recorded) Feb 2 21:51:17.937: INFO: Container kube-proxy ready: true, restart count 0 Feb 2 21:51:17.937: INFO: metrics-server-v0.5.2-6d6794c8cd-9vklc started at 2023-02-02 21:24:01 +0000 UTC (0+2 container statuses recorded) Feb 2 21:51:17.937: INFO: Container metrics-server ready: true, restart count 0 Feb 2 21:51:17.937: INFO: Container metrics-server-nanny ready: true, restart count 0 Feb 2 21:51:17.937: INFO: coredns-8c79ffd8b-4v5p9 started at 2023-02-02 21:23:54 +0000 UTC (0+1 container statuses recorded) Feb 2 21:51:17.937: INFO: Container coredns ready: true, restart count 0 Feb 2 21:51:17.937: INFO: metadata-proxy-v0.1-xl4fd started at 2023-02-02 21:23:40 +0000 UTC (0+2 container statuses recorded) Feb 2 21:51:17.937: INFO: Container metadata-proxy ready: true, restart count 0 Feb 2 21:51:17.937: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Feb 2 21:51:17.937: INFO: konnectivity-agent-k667p started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:51:17.937: INFO: Container konnectivity-agent ready: true, restart count 0 Feb 2 21:51:18.113: INFO: Latency metrics for node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 21:51:18.113: INFO: Logging node info for node e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 21:51:18.157: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-windows-node-group-jllf 1f67dcc8-9253-4e93-8b90-78810a8df879 7323 0 2023-02-02 21:29:09 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-windows-node-group-jllf kubernetes.io/os:windows node.kubernetes.io/instance-type:n1-standard-4 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:29:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:29:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet.exe Update v1 2023-02-02 21:29:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet.exe Update v1 2023-02-02 21:30:10 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-windows-node-group-jllf,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{107252527104 0} {<nil>} 104738796Ki BinarySI},memory: {{16102309888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{96527274234 0} {<nil>} 96527274234 DecimalSI},memory: {{15840165888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:29:09 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:46:41 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:46:41 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:46:41 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:46:41 +0000 UTC,LastTransitionTime:2023-02-02 21:29:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.4,},NodeAddress{Type:ExternalIP,Address:34.83.75.252,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-windows-node-group-jllf.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-windows-node-group-jllf,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e2e-7d89e54d79-37bac-windows-node-group-jllf,SystemUUID:23C88569-8B16-0615-6BFF-BB819EADA98A,BootID:9,KernelVersion:10.0.17763.2183,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 
k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:204397145,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:203202672,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:104484632,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:102745583,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 21:51:18.157: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 21:51:18.202: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 21:51:18.257: INFO: termination-message-container73ea76c3-3c2c-4a5c-9384-38f5ecb6a318 started at 2023-02-02 21:49:37 +0000 UTC (0+1 container statuses recorded) Feb 2 21:51:18.258: INFO: Container termination-message-container ready: false, restart count 0 Feb 2 21:51:18.258: INFO: host-process-test-pod started at 2023-02-02 21:51:08 +0000 UTC (0+1 container statuses recorded) Feb 2 21:51:18.258: INFO: Container computer-name-test ready: false, restart count 0 Feb 2 21:51:18.438: INFO: Latency metrics for node e2e-7d89e54d79-37bac-windows-node-group-jllf Feb 2 21:51:18.438: INFO: Logging node info for node e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 21:51:18.480: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-windows-node-group-k0qm 91cb59e3-df60-4007-bdc1-bb197e591e43 8069 0 2023-02-02 21:29:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-windows-node-group-k0qm kubernetes.io/os:windows node.kubernetes.io/instance-type:n1-standard-4 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet.exe Update v1 2023-02-02 21:29:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-02-02 21:29:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:29:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet.exe Update v1 2023-02-02 21:30:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-windows-node-group-k0qm,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{107252527104 0} {<nil>} 104738796Ki BinarySI},memory: {{16102309888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{96527274234 0} {<nil>} 96527274234 DecimalSI},memory: {{15840165888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:29:24 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:48:38 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:48:38 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:48:38 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:48:38 +0000 UTC,LastTransitionTime:2023-02-02 21:29:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.1.208,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-windows-node-group-k0qm.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-windows-node-group-k0qm,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e2e-7d89e54d79-37bac-windows-node-group-k0qm,SystemUUID:FC53E984-3141-4AB0-99D2-83726BB3072F,BootID:9,KernelVersion:10.0.17763.2183,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:204397145,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:203202672,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:179603451,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:104484632,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 21:51:18.480: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 21:51:18.523: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 21:51:18.568: INFO: test-webserver-3e306a57-1907-4a64-9a98-7f83ae617cab started at 2023-02-02 21:50:36 +0000 UTC (0+1 container statuses recorded) Feb 2 21:51:18.568: INFO: Container test-webserver ready: false, restart count 0 Feb 2 21:51:18.568: INFO: dns-test-dae0c1a6-25f9-4528-9bc3-5483c2b47694 started at 2023-02-02 21:51:10 +0000 UTC (0+3 container statuses recorded) Feb 2 21:51:18.568: INFO: Container jessie-querier ready: true, restart count 0 Feb 2 21:51:18.568: INFO: Container querier ready: true, restart count 0 Feb 2 21:51:18.568: INFO: Container webserver ready: true, restart count 0 Feb 2 21:51:18.823: INFO: Latency metrics for node e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 21:51:18.823: INFO: Logging node info for node 
e2e-7d89e54d79-37bac-windows-node-group-q21f Feb 2 21:51:18.866: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-windows-node-group-q21f eef3ae47-aa0d-4af8-87e8-4c4de04eace2 8156 0 2023-02-02 21:29:14 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-windows-node-group-q21f kubernetes.io/os:windows node.kubernetes.io/instance-type:n1-standard-4 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:29:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:29:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet.exe Update v1 2023-02-02 21:29:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet.exe Update v1 2023-02-02 21:30:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-windows-node-group-q21f,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{107252527104 0} {<nil>} 104738796Ki BinarySI},memory: {{16102309888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{96527274234 0} {<nil>} 96527274234 DecimalSI},memory: {{15840165888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:29:14 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:49:01 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:49:01 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:49:01 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:49:01 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.230.207,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-windows-node-group-q21f.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-windows-node-group-q21f,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e2e-7d89e54d79-37bac-windows-node-group-q21f,SystemUUID:B1BBE679-4138-5169-4472-E3B13289F193,BootID:9,KernelVersion:10.0.17763.2183,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:204397145,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:203202672,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:179603451,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:104484632,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:102745583,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 21:51:18.866: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-windows-node-group-q21f Feb 2 
21:51:18.911: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-windows-node-group-q21f Feb 2 21:51:19.139: INFO: Latency metrics for node e2e-7d89e54d79-37bac-windows-node-group-q21f Feb 2 21:51:19.139: INFO: Waiting up to 3m0s for all (but 3) nodes to be ready STEP: Destroying namespace "host-process-test-windows-7093" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-windows\]\s\[Feature\:WindowsHostProcessContainers\]\s\[MinimumKubeletVersion\:1\.22\]\sHostProcess\scontainers\sshould\ssupport\svarious\svolume\smount\stypes$'
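(For readability: the escaped --ginkgo.focus pattern above selects the single test named "Kubernetes e2e suite [sig-windows] [Feature:WindowsHostProcessContainers] [MinimumKubeletVersion:1.22] HostProcess containers should support various volume mount types".)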
test/e2e/windows/host_process.go:422 Feb 2 21:39:14.309: Expected <v1.PodPhase>: Failed to equal <v1.PodPhase>: Succeeded test/e2e/windows/host_process.go:480 from junit_04.xml
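The "Expected <v1.PodPhase>: Failed to equal <v1.PodPhase>: Succeeded" message is standard Gomega Equal-matcher output: the test observed a terminal pod phase of Failed where it required Succeeded. A minimal sketch of that kind of assertion follows (helper name assumed, and a Ginkgo fail handler is assumed to be registered; the verbatim assertion lives at test/e2e/windows/host_process.go:480):

// Sketch only -- approximates the terminal-phase check that produced the
// failure above; not the actual test source.
package windows

import (
	"github.com/onsi/gomega"
	v1 "k8s.io/api/core/v1"
)

// expectPodSucceeded (hypothetical name) fails the spec when the pod's
// terminal phase is anything other than Succeeded, emitting Gomega's
// "Expected <v1.PodPhase>: ... to equal <v1.PodPhase>: ..." message.
func expectPodSucceeded(pod *v1.Pod) {
	gomega.Expect(pod.Status.Phase).To(gomega.Equal(v1.PodSucceeded))
}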
[BeforeEach] [sig-windows] [Feature:WindowsHostProcessContainers] [MinimumKubeletVersion:1.22] HostProcess containers test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] [Feature:WindowsHostProcessContainers] [MinimumKubeletVersion:1.22] HostProcess containers test/e2e/windows/host_process.go:81 [BeforeEach] [sig-windows] [Feature:WindowsHostProcessContainers] [MinimumKubeletVersion:1.22] HostProcess containers test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Feb 2 21:39:02.082: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename host-process-test-windows STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support various volume mount types test/e2e/windows/host_process.go:422 STEP: Creating a configmap containing test data and a validation script STEP: Creating a secret containing test data STEP: Creating a pod with a HostProcess container that uses various types of volume mounts STEP: Waiting for pod to run Feb 2 21:39:02.513: INFO: Waiting up to 3m0s for pod "host-process-volume-mounts" in namespace "host-process-test-windows-5416" to be "Succeeded or Failed" Feb 2 21:39:02.555: INFO: Pod "host-process-volume-mounts": Phase="Pending", Reason="", readiness=false. Elapsed: 42.409995ms Feb 2 21:39:04.600: INFO: Pod "host-process-volume-mounts": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087288645s Feb 2 21:39:06.651: INFO: Pod "host-process-volume-mounts": Phase="Running", Reason="", readiness=true. Elapsed: 4.138553433s Feb 2 21:39:08.695: INFO: Pod "host-process-volume-mounts": Phase="Running", Reason="", readiness=true. Elapsed: 6.182385304s Feb 2 21:39:10.741: INFO: Pod "host-process-volume-mounts": Phase="Running", Reason="", readiness=false. Elapsed: 8.227775077s Feb 2 21:39:12.868: INFO: Pod "host-process-volume-mounts": Phase="Failed", Reason="", readiness=false. Elapsed: 10.355625235s Feb 2 21:39:12.868: INFO: Pod "host-process-volume-mounts" satisfied condition "Succeeded or Failed" Feb 2 21:39:14.163: INFO: Container logs: NODE_NAME_TEST env var does not equal COMPUTERNAME At C:\C\31b0feb0a17373d1d4f64045ea930037e37d02dfe30445066c6034ef72c8d5d2\etc\configmap\validationscript.ps1:30 char:3 + throw "NODE_NAME_TEST env var does not equal COMPUTERNAME" + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : OperationStopped: (NODE_NAME_TEST ...al COMPUTERNAME:String) [], RuntimeException + FullyQualifiedErrorId : NODE_NAME_TEST env var does not equal COMPUTERNAME STEP: Then ensuring pod finished running successfully Feb 2 21:39:14.309: FAIL: Expected <v1.PodPhase>: Failed to equal <v1.PodPhase>: Succeeded Full Stack Trace k8s.io/kubernetes/test/e2e/windows.glob..func7.5() test/e2e/windows/host_process.go:480 +0x6e6 k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?) test/e2e/e2e.go:130 +0x6bb k8s.io/kubernetes/test/e2e.TestE2E(0x0?) test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000ae2680, 0x741f9a8) /usr/local/go/src/testing/testing.go:1446 +0x10b created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1493 +0x35f [AfterEach] [sig-windows] [Feature:WindowsHostProcessContainers] [MinimumKubeletVersion:1.22] HostProcess containers test/e2e/framework/framework.go:188 STEP: Collecting events from namespace "host-process-test-windows-5416". STEP: Found 4 events.
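The container logs above show the pod failed inside its ConfigMap-mounted validation script: the NODE_NAME_TEST environment variable did not match the COMPUTERNAME the HostProcess container sees. One plausible source of such a mismatch (an observation, not confirmed by this log): COMPUTERNAME reflects the Windows NetBIOS computer name, which is capped at 15 characters, while the assigned node name here (e2e-7d89e54d79-37bac-windows-node-group-k0qm) is far longer. A minimal Go sketch of the equality check the PowerShell script performs, assuming NODE_NAME_TEST is injected via the downward API:

// Sketch of the check in validationscript.ps1 (illustrative only; the real
// check is the PowerShell `throw` quoted in the container logs above).
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	nodeName := os.Getenv("NODE_NAME_TEST")       // assumed: downward API (spec.nodeName)
	computerName := os.Getenv("COMPUTERNAME")     // set by Windows for host processes
	// Windows hostnames are case-insensitive, so compare case-insensitively,
	// matching PowerShell's default -ne behavior.
	if !strings.EqualFold(nodeName, computerName) {
		fmt.Fprintf(os.Stderr, "NODE_NAME_TEST env var does not equal COMPUTERNAME (%q vs %q)\n", nodeName, computerName)
		os.Exit(1)
	}
	fmt.Println("NODE_NAME_TEST matches COMPUTERNAME")
}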
Feb 2 21:39:14.516: INFO: At 2023-02-02 21:39:02 +0000 UTC - event for host-process-volume-mounts: {default-scheduler } Scheduled: Successfully assigned host-process-test-windows-5416/host-process-volume-mounts to e2e-7d89e54d79-37bac-windows-node-group-k0qm Feb 2 21:39:14.516: INFO: At 2023-02-02 21:39:02 +0000 UTC - event for host-process-volume-mounts: {kubelet e2e-7d89e54d79-37bac-windows-node-group-k0qm} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Feb 2 21:39:14.516: INFO: At 2023-02-02 21:39:03 +0000 UTC - event for host-process-volume-mounts: {kubelet e2e-7d89e54d79-37bac-windows-node-group-k0qm} Created: Created container host-process-volume-mounts Feb 2 21:39:14.516: INFO: At 2023-02-02 21:39:03 +0000 UTC - event for host-process-volume-mounts: {kubelet e2e-7d89e54d79-37bac-windows-node-group-k0qm} Started: Started container host-process-volume-mounts Feb 2 21:39:14.610: INFO: POD NODE PHASE GRACE CONDITIONS Feb 2 21:39:14.610: INFO: host-process-volume-mounts e2e-7d89e54d79-37bac-windows-node-group-k0qm Failed [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 21:39:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 21:39:08 +0000 UTC PodFailed } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-02-02 21:39:08 +0000 UTC PodFailed } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-02-02 21:39:02 +0000 UTC }] Feb 2 21:39:14.610: INFO: Feb 2 21:39:14.925: INFO: Logging node info for node e2e-7d89e54d79-37bac-master Feb 2 21:39:14.970: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-master b403d958-aef5-4e5e-9b07-9812dc3e7d8b 2639 0 2023-02-02 21:23:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:23:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-02-02 21:23:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-02-02 21:23:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kubelet Update v1 2023-02-02 21:23:56 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3864313856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3602169856 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:23:35 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:34:08 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:34:08 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:34:08 +0000 UTC,LastTransitionTime:2023-02-02 21:23:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:34:08 +0000 UTC,LastTransitionTime:2023-02-02 21:23:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.2,},NodeAddress{Type:ExternalIP,Address:35.247.98.204,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-master.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-master.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e8df098cf83a91bc3c7c2a97ba5a41e9,SystemUUID:e8df098c-f83a-91bc-3c7c-2a97ba5a41e9,BootID:59df2086-4103-4b38-9939-c916841efb98,KernelVersion:5.4.129+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:131733971,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:121342787,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:52751170,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:be60ef505fc80879eeb5d8bf3ad8bb1146b395afc2394584645e99431806c26c gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.12.0],SizeBytes:32705362,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:d863f7fd0da4392b9753dc6c9195a658e80d70e0be8c9adb410d77cf20b75c76 registry.k8s.io/kas-network-proxy/proxy-server:v0.0.35],SizeBytes:21985251,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 21:39:14.970: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-master Feb 2 21:39:15.027: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-master Feb 2 21:39:15.119: INFO: kube-controller-manager-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:15.119: INFO: Container kube-controller-manager ready: true, restart count 2 Feb 2 21:39:15.119: INFO: etcd-server-events-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:15.119: INFO: Container etcd-container ready: true, restart count 0 Feb 2 21:39:15.119: INFO: kube-addon-manager-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:58 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:15.119: INFO: Container kube-addon-manager ready: true, restart count 0 Feb 2 21:39:15.119: 
INFO: etcd-server-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:15.119: INFO: Container etcd-container ready: true, restart count 0 Feb 2 21:39:15.119: INFO: l7-lb-controller-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:58 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:15.119: INFO: Container l7-lb-controller ready: true, restart count 3 Feb 2 21:39:15.119: INFO: metadata-proxy-v0.1-fmxnz started at 2023-02-02 21:23:46 +0000 UTC (0+2 container statuses recorded) Feb 2 21:39:15.119: INFO: Container metadata-proxy ready: true, restart count 0 Feb 2 21:39:15.119: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Feb 2 21:39:15.119: INFO: konnectivity-server-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:15.119: INFO: Container konnectivity-server-container ready: true, restart count 0 Feb 2 21:39:15.119: INFO: kube-apiserver-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:15.119: INFO: Container kube-apiserver ready: true, restart count 0 Feb 2 21:39:15.119: INFO: kube-scheduler-e2e-7d89e54d79-37bac-master started at 2023-02-02 21:22:23 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:15.119: INFO: Container kube-scheduler ready: true, restart count 0 Feb 2 21:39:15.420: INFO: Latency metrics for node e2e-7d89e54d79-37bac-master Feb 2 21:39:15.420: INFO: Logging node info for node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 21:39:15.474: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-minion-group-1vp1 d81fe224-05dd-48a7-9693-e2f2826a1b97 4459 0 2023-02-02 21:23:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-minion-group-1vp1 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-4 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-02-02 21:23:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-02-02 21:23:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.5.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:23:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-02-02 21:23:43 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-02-02 21:24:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.5.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-minion-group-1vp1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.5.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15735660544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15473516544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-02-02 21:38:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-02-02 21:38:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-02-02 21:38:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-02-02 21:38:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-02-02 21:38:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no 
deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-02-02 21:38:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-02-02 21:38:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:43 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:23:38 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:36:23 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:36:23 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:36:23 +0000 UTC,LastTransitionTime:2023-02-02 21:23:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:36:23 +0000 UTC,LastTransitionTime:2023-02-02 21:23:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.7,},NodeAddress{Type:ExternalIP,Address:35.197.102.154,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-minion-group-1vp1.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-minion-group-1vp1.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ade76a2ad94b4b90c3b7ba811704d98c,SystemUUID:ade76a2a-d94b-4b90-c3b7-ba811704d98c,BootID:29452487-f38a-42cd-8605-aecb73730dd9,KernelVersion:5.4.129+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:112212023,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4 
k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0],SizeBytes:18952261,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[k8s.gcr.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 k8s.gcr.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:8970dca5c4c9df1d566c3c3c91ef2e743e410a8623d42062eb48e7245f1eef93 registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.35],SizeBytes:8488019,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 k8s.gcr.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 21:39:15.475: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 21:39:15.525: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 21:39:16.601: INFO: l7-default-backend-8667cd4ffc-pgmnb started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:16.601: INFO: Container default-http-backend ready: true, restart count 0 Feb 2 21:39:16.601: INFO: simpletest-rc-to-be-deleted-zc5r5 started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:16.601: INFO: Container nginx ready: false, restart count 0 Feb 2 21:39:16.601: INFO: simpletest-rc-to-be-deleted-ql2s9 started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:16.601: INFO: Container nginx ready: false, restart count 0 Feb 2 21:39:16.601: INFO: simpletest-rc-to-be-deleted-g45zr started at <nil> (0+0 container statuses recorded) Feb 2 21:39:16.601: INFO: simpletest-rc-to-be-deleted-spbrd started at <nil> (0+0 container statuses recorded) Feb 2 21:39:16.601: INFO: simpletest-rc-to-be-deleted-z2kj9 started at <nil> (0+0 container statuses recorded) Feb 2 21:39:16.601: INFO: kube-proxy-e2e-7d89e54d79-37bac-minion-group-1vp1 started at 2023-02-02 21:23:38 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:16.601: INFO: Container kube-proxy ready: true, restart count 0 Feb 2 21:39:16.601: INFO: coredns-8c79ffd8b-rd5tr started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:16.601: INFO: Container coredns ready: true, restart count 0 Feb 2 21:39:16.601: INFO: 
simpletest-rc-to-be-deleted-hjmgs started at <nil> (0+0 container statuses recorded) Feb 2 21:39:16.601: INFO: simpletest-rc-to-be-deleted-d2z2r started at <nil> (0+0 container statuses recorded) Feb 2 21:39:16.601: INFO: simpletest-rc-to-be-deleted-2f5qk started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:16.601: INFO: Container nginx ready: false, restart count 0 Feb 2 21:39:16.601: INFO: simpletest-rc-to-be-deleted-flssr started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:16.601: INFO: Container nginx ready: false, restart count 0 Feb 2 21:39:16.601: INFO: simpletest-rc-to-be-deleted-l7hlk started at <nil> (0+0 container statuses recorded) Feb 2 21:39:16.601: INFO: simpletest-rc-to-be-deleted-7qr2g started at <nil> (0+0 container statuses recorded) Feb 2 21:39:16.601: INFO: volume-snapshot-controller-0 started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:16.601: INFO: Container volume-snapshot-controller ready: true, restart count 0 Feb 2 21:39:16.601: INFO: konnectivity-agent-mn5mq started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:16.601: INFO: Container konnectivity-agent ready: true, restart count 0 Feb 2 21:39:16.601: INFO: simpletest-rc-to-be-deleted-mb4hg started at <nil> (0+0 container statuses recorded) Feb 2 21:39:16.601: INFO: simpletest-rc-to-be-deleted-fgg9p started at <nil> (0+0 container statuses recorded) Feb 2 21:39:16.601: INFO: simpletest-rc-to-be-deleted-5kk4b started at <nil> (0+0 container statuses recorded) Feb 2 21:39:16.601: INFO: kube-dns-autoscaler-596f6cf79f-v76jk started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:16.601: INFO: Container autoscaler ready: true, restart count 0 Feb 2 21:39:16.601: INFO: simpletest-rc-to-be-deleted-mk8hz started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:16.601: INFO: Container nginx ready: false, restart count 0 Feb 2 21:39:16.601: INFO: simpletest-rc-to-be-deleted-fdk6f started at <nil> (0+0 container statuses recorded) Feb 2 21:39:16.601: INFO: simpletest-rc-to-be-deleted-hl27q started at <nil> (0+0 container statuses recorded) Feb 2 21:39:16.601: INFO: simpletest-rc-to-be-deleted-k5s4s started at <nil> (0+0 container statuses recorded) Feb 2 21:39:16.601: INFO: metadata-proxy-v0.1-kmxp5 started at 2023-02-02 21:23:39 +0000 UTC (0+2 container statuses recorded) Feb 2 21:39:16.601: INFO: Container metadata-proxy ready: true, restart count 0 Feb 2 21:39:16.601: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Feb 2 21:39:16.601: INFO: simpletest-rc-to-be-deleted-4lkr2 started at 2023-02-02 21:39:11 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:16.601: INFO: Container nginx ready: false, restart count 0 Feb 2 21:39:17.486: INFO: Latency metrics for node e2e-7d89e54d79-37bac-minion-group-1vp1 Feb 2 21:39:17.486: INFO: Logging node info for node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 21:39:17.529: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-minion-group-fhnf 9c0dcb7a-8a6b-4535-afe8-b62bf19173f7 4457 0 2023-02-02 21:23:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 
kubernetes.io/hostname:e2e-7d89e54d79-37bac-minion-group-fhnf kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-4 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:23:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.4.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:23:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-02-02 21:23:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-02-02 21:23:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-02-02 21:24:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.4.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-minion-group-fhnf,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.4.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15735660544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{15473516544 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-02-02 21:38:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-02-02 21:38:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-02-02 21:38:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-02-02 21:38:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-02-02 21:38:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-02-02 21:38:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-02-02 21:38:44 +0000 UTC,LastTransitionTime:2023-02-02 21:23:44 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:23:39 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:36:25 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:36:25 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:36:25 +0000 UTC,LastTransitionTime:2023-02-02 21:23:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:36:25 +0000 UTC,LastTransitionTime:2023-02-02 21:23:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.6,},NodeAddress{Type:ExternalIP,Address:34.127.30.111,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-minion-group-fhnf.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-minion-group-fhnf.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8abcabc408b3bd715147992f3d5a5854,SystemUUID:8abcabc4-08b3-bd71-5147-992f3d5a5854,BootID:8d0324c2-172b-4c43-81ee-83b6878e11ee,KernelVersion:5.4.129+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.24.11-rc.0.11_73da4d3652771d],SizeBytes:112212023,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 k8s.gcr.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[k8s.gcr.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 k8s.gcr.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:8970dca5c4c9df1d566c3c3c91ef2e743e410a8623d42062eb48e7245f1eef93 registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.35],SizeBytes:8488019,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 2 21:39:17.530: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 21:39:17.573: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-minion-group-fhnf Feb 2 21:39:17.960: INFO: simpletest-rc-to-be-deleted-q2vxh started at <nil> (0+0 container statuses recorded) Feb 2 21:39:17.960: INFO: kube-proxy-e2e-7d89e54d79-37bac-minion-group-fhnf started at 
2023-02-02 21:23:39 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:17.960: INFO: Container kube-proxy ready: true, restart count 0 Feb 2 21:39:17.960: INFO: metrics-server-v0.5.2-6d6794c8cd-9vklc started at 2023-02-02 21:24:01 +0000 UTC (0+2 container statuses recorded) Feb 2 21:39:17.960: INFO: Container metrics-server ready: true, restart count 0 Feb 2 21:39:17.960: INFO: Container metrics-server-nanny ready: true, restart count 0 Feb 2 21:39:17.960: INFO: simpletest-rc-to-be-deleted-f7569 started at 2023-02-02 21:39:11 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:17.960: INFO: Container nginx ready: false, restart count 0 Feb 2 21:39:17.960: INFO: simpletest-rc-to-be-deleted-v6mcb started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:17.960: INFO: Container nginx ready: false, restart count 0 Feb 2 21:39:17.960: INFO: simpletest-rc-to-be-deleted-gn68n started at <nil> (0+0 container statuses recorded) Feb 2 21:39:17.960: INFO: simpletest-rc-to-be-deleted-bz4cm started at <nil> (0+0 container statuses recorded) Feb 2 21:39:17.960: INFO: simpletest-rc-to-be-deleted-mzqxm started at <nil> (0+0 container statuses recorded) Feb 2 21:39:17.960: INFO: simpletest-rc-to-be-deleted-6pntx started at <nil> (0+0 container statuses recorded) Feb 2 21:39:17.960: INFO: coredns-8c79ffd8b-4v5p9 started at 2023-02-02 21:23:54 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:17.960: INFO: Container coredns ready: true, restart count 0 Feb 2 21:39:17.960: INFO: simpletest-rc-to-be-deleted-dbdhb started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:17.960: INFO: Container nginx ready: false, restart count 0 Feb 2 21:39:17.960: INFO: simpletest-rc-to-be-deleted-t6fgq started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:17.960: INFO: Container nginx ready: false, restart count 0 Feb 2 21:39:17.960: INFO: simpletest-rc-to-be-deleted-fhrqv started at 2023-02-02 21:39:11 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:17.960: INFO: Container nginx ready: false, restart count 0 Feb 2 21:39:17.960: INFO: simpletest-rc-to-be-deleted-gqtkk started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:17.960: INFO: Container nginx ready: false, restart count 0 Feb 2 21:39:17.960: INFO: simpletest-rc-to-be-deleted-mdxth started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:17.960: INFO: Container nginx ready: false, restart count 0 Feb 2 21:39:17.960: INFO: simpletest-rc-to-be-deleted-r89jm started at <nil> (0+0 container statuses recorded) Feb 2 21:39:17.960: INFO: simpletest-rc-to-be-deleted-swgln started at <nil> (0+0 container statuses recorded) Feb 2 21:39:17.960: INFO: metadata-proxy-v0.1-xl4fd started at 2023-02-02 21:23:40 +0000 UTC (0+2 container statuses recorded) Feb 2 21:39:17.960: INFO: Container metadata-proxy ready: true, restart count 0 Feb 2 21:39:17.960: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Feb 2 21:39:17.960: INFO: konnectivity-agent-k667p started at 2023-02-02 21:23:49 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:17.960: INFO: Container konnectivity-agent ready: true, restart count 0 Feb 2 21:39:17.960: INFO: simpletest-rc-to-be-deleted-s45wr started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded) Feb 2 21:39:17.960: INFO: Container nginx ready: false, restart count 0 Feb 2 21:39:17.960: INFO: simpletest-rc-to-be-deleted-9zd4j started at 
2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:17.960: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:17.960: INFO: simpletest-rc-to-be-deleted-5zcjq started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:17.960: INFO: simpletest-rc-to-be-deleted-z8sjf started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:17.960: INFO: simpletest-rc-to-be-deleted-wj6bx started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:18.415: INFO: Latency metrics for node e2e-7d89e54d79-37bac-minion-group-fhnf
Feb 2 21:39:18.415: INFO: Logging node info for node e2e-7d89e54d79-37bac-windows-node-group-jllf
Feb 2 21:39:18.460: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-windows-node-group-jllf 1f67dcc8-9253-4e93-8b90-78810a8df879 4851 0 2023-02-02 21:29:09 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-windows-node-group-jllf kubernetes.io/os:windows node.kubernetes.io/instance-type:n1-standard-4 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:29:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:29:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet.exe Update v1 2023-02-02 21:29:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet.exe Update v1 2023-02-02 21:30:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-windows-node-group-jllf,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{107252527104 0} {<nil>} 104738796Ki BinarySI},memory: {{16102309888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{96527274234 0} {<nil>} 96527274234 DecimalSI},memory: {{15840165888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:29:09 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:39:13 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:39:13 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:39:13 +0000 UTC,LastTransitionTime:2023-02-02 21:29:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:39:13 +0000 UTC,LastTransitionTime:2023-02-02 21:29:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.4,},NodeAddress{Type:ExternalIP,Address:34.83.75.252,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-windows-node-group-jllf.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-windows-node-group-jllf,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e2e-7d89e54d79-37bac-windows-node-group-jllf,SystemUUID:23C88569-8B16-0615-6BFF-BB819EADA98A,BootID:9,KernelVersion:10.0.17763.2183,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:204397145,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:203202672,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:104484632,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Feb 2 21:39:18.460: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-windows-node-group-jllf
Feb 2 21:39:18.503: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-windows-node-group-jllf
Feb 2 21:39:18.560: INFO: simpletest-rc-to-be-deleted-r4qcw started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:18.560: INFO: simpletest-rc-to-be-deleted-n78pt started at 2023-02-02 21:39:11 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:18.560: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:18.560: INFO: simpletest-rc-to-be-deleted-zfr8l started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:18.560: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:18.560: INFO: simpletest-rc-to-be-deleted-gczvm started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:18.560: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:18.560: INFO: simpletest-rc-to-be-deleted-2rggd started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:18.560: INFO: simpletest-rc-to-be-deleted-55kds started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:18.560: INFO: simpletest-rc-to-be-deleted-7pmgt started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:18.560: INFO: simpletest-rc-to-be-deleted-nz8sp started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:18.560: INFO: simpletest-rc-to-be-deleted-zdbr9 started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:18.560: INFO: simpletest-rc-to-be-deleted-ck4hf started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:18.560: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:18.560: INFO: simpletest-rc-to-be-deleted-5q8h5 started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:18.560: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:18.560: INFO: simpletest-rc-to-be-deleted-jppq4 started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:18.560: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:18.560: INFO: host-process-command-1 started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:18.560: INFO: simpletest-rc-to-be-deleted-6bd2z started at 2023-02-02 21:39:11 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:18.560: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:18.560: INFO: simpletest-rc-to-be-deleted-jzfdx started at 2023-02-02 21:39:11 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:18.560: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:18.560: INFO: simpletest-rc-to-be-deleted-dnr2s started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:18.560: INFO: simpletest-rc-to-be-deleted-w59xq started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:18.560: INFO: simpletest-rc-to-be-deleted-gzvps started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:18.560: INFO: simpletest-rc-to-be-deleted-nfl9k started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:18.560: INFO: simpletest-rc-to-be-deleted-ql85z started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:18.560: INFO: host-process-command-0 started at 2023-02-02 21:39:01 +0000 UTC (0+4 container statuses recorded)
Feb 2 21:39:18.560: INFO: Container host-process-command-0-0 ready: false, restart count 0
Feb 2 21:39:18.560: INFO: Container host-process-command-0-1 ready: false, restart count 0
Feb 2 21:39:18.560: INFO: Container host-process-command-0-2 ready: false, restart count 0
Feb 2 21:39:18.560: INFO: Container host-process-command-0-3 ready: false, restart count 0
Feb 2 21:39:18.560: INFO: simpletest-rc-to-be-deleted-wpvnm started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:18.560: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:19.364: INFO: Latency metrics for node e2e-7d89e54d79-37bac-windows-node-group-jllf
Feb 2 21:39:19.364: INFO: Logging node info for node e2e-7d89e54d79-37bac-windows-node-group-k0qm
Feb 2 21:39:19.409: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-windows-node-group-k0qm 91cb59e3-df60-4007-bdc1-bb197e591e43 4252 0 2023-02-02 21:29:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-windows-node-group-k0qm kubernetes.io/os:windows node.kubernetes.io/instance-type:n1-standard-4 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet.exe Update v1 2023-02-02 21:29:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-02-02 21:29:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:29:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet.exe Update v1 2023-02-02 21:30:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-windows-node-group-k0qm,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{107252527104 0} {<nil>} 104738796Ki BinarySI},memory: {{16102309888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{96527274234 0} {<nil>} 96527274234 DecimalSI},memory: {{15840165888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:29:24 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:38:25 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:38:25 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:38:25 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:38:25 +0000 UTC,LastTransitionTime:2023-02-02 21:29:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.1.208,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-windows-node-group-k0qm.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-windows-node-group-k0qm,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e2e-7d89e54d79-37bac-windows-node-group-k0qm,SystemUUID:FC53E984-3141-4AB0-99D2-83726BB3072F,BootID:9,KernelVersion:10.0.17763.2183,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:204397145,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:203202672,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:179603451,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:104484632,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Feb 2 21:39:19.410: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-windows-node-group-k0qm
Feb 2 21:39:19.536: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-windows-node-group-k0qm
Feb 2 21:39:19.800: INFO: simpletest-rc-to-be-deleted-vdt5g started at 2023-02-02 21:39:11 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:19.800: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:19.800: INFO: simpletest-rc-to-be-deleted-9q5w6 started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:19.800: INFO: simpletest-rc-to-be-deleted-2pbgw started at 2023-02-02 21:39:13 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:19.800: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:19.800: INFO: simpletest-rc-to-be-deleted-qwbnq started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:19.800: INFO: simpletest-rc-to-be-deleted-ctsdh started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:19.800: INFO: simpletest-rc-to-be-deleted-pkv6c started at 2023-02-02 21:39:11 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:19.800: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:19.800: INFO: simpletest-rc-to-be-deleted-r24wr started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:19.800: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:19.800: INFO: simpletest-rc-to-be-deleted-h286d started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:19.800: INFO: simpletest-rc-to-be-deleted-g7978 started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:19.800: INFO: simpletest-rc-to-be-deleted-s6sn5 started at 2023-02-02 21:39:11 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:19.800: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:19.800: INFO: simpletest-rc-to-be-deleted-vzvw5 started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:19.800: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:19.800: INFO: simpletest-rc-to-be-deleted-g984g started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:19.800: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:19.800: INFO: simpletest-rc-to-be-deleted-pv4p2 started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:19.800: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:19.800: INFO: simpletest-rc-to-be-deleted-pkm2r started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:19.800: INFO: simpletest-rc-to-be-deleted-7zv5d started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:19.800: INFO: host-process-volume-mounts started at 2023-02-02 21:39:02 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:19.800: INFO: Container host-process-volume-mounts ready: false, restart count 0
Feb 2 21:39:19.800: INFO: simpletest-rc-to-be-deleted-5wvsm started at 2023-02-02 21:39:14 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:19.800: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:19.800: INFO: simpletest-rc-to-be-deleted-kk6lm started at 2023-02-02 21:39:14 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:19.800: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:19.800: INFO: simpletest-rc-to-be-deleted-zzqbz started at 2023-02-02 21:39:15 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:19.800: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:19.800: INFO: simpletest-rc-to-be-deleted-59p6m started at 2023-02-02 21:39:15 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:19.800: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:19.800: INFO: simpletest-rc-to-be-deleted-wfxrr started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:19.800: INFO: simpletest-rc-to-be-deleted-ctq67 started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:19.800: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:20.518: INFO: Latency metrics for node e2e-7d89e54d79-37bac-windows-node-group-k0qm
Feb 2 21:39:20.518: INFO: Logging node info for node e2e-7d89e54d79-37bac-windows-node-group-q21f
Feb 2 21:39:20.562: INFO: Node Info: &Node{ObjectMeta:{e2e-7d89e54d79-37bac-windows-node-group-q21f eef3ae47-aa0d-4af8-87e8-4c4de04eace2 4481 0 2023-02-02 21:29:14 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-4 beta.kubernetes.io/os:windows failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:e2e-7d89e54d79-37bac-windows-node-group-q21f kubernetes.io/os:windows node.kubernetes.io/instance-type:n1-standard-4 node.kubernetes.io/windows-build:10.0.17763 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-02-02 21:29:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-02-02 21:29:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet.exe Update v1 2023-02-02 21:29:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:node.kubernetes.io/windows-build":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet.exe Update v1 2023-02-02 21:30:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://kube-gce-upg-1-4-1-5-upg-mas/us-west1-b/e2e-7d89e54d79-37bac-windows-node-group-q21f,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{107252527104 0} {<nil>} 104738796Ki BinarySI},memory: {{16102309888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{96527274234 0} {<nil>} 96527274234 DecimalSI},memory: {{15840165888 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-02-02 21:29:14 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:RouteCreated,Message:NodeController create implicit route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-02 21:38:47 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-02 21:38:47 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-02 21:38:47 +0000 UTC,LastTransitionTime:2023-02-02 21:29:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-02 21:38:47 +0000 UTC,LastTransitionTime:2023-02-02 21:29:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.40.0.5,},NodeAddress{Type:ExternalIP,Address:34.168.230.207,},NodeAddress{Type:InternalDNS,Address:e2e-7d89e54d79-37bac-windows-node-group-q21f.c.kube-gce-upg-1-4-1-5-upg-mas.internal,},NodeAddress{Type:Hostname,Address:e2e-7d89e54d79-37bac-windows-node-group-q21f,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e2e-7d89e54d79-37bac-windows-node-group-q21f,SystemUUID:B1BBE679-4138-5169-4472-E3B13289F193,BootID:9,KernelVersion:10.0.17763.2183,OSImage:Windows Server 2019 Datacenter,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.24.11-rc.0.11+73da4d3652771d,KubeProxyVersion:v1.24.11-rc.0.11+73da4d3652771d,OperatingSystem:windows,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:205990572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:204397145,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:203697351,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:203202672,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:179603451,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:168375500,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:166539683,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:104484632,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:102745583,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Feb 2 21:39:20.562: INFO: Logging kubelet events for node e2e-7d89e54d79-37bac-windows-node-group-q21f
Feb 2 21:39:20.613: INFO: Logging pods the kubelet thinks is on node e2e-7d89e54d79-37bac-windows-node-group-q21f
Feb 2 21:39:21.306: INFO: simpletest-rc-to-be-deleted-c2fhb started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:21.306: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:21.306: INFO: simpletest-rc-to-be-deleted-kt4zz started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:21.306: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:21.306: INFO: simpletest-rc-to-be-deleted-nxpw2 started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:21.306: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:21.306: INFO: simpletest-rc-to-be-deleted-ccwqp started at 2023-02-02 21:39:13 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:21.306: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:21.306: INFO: simpletest-rc-to-be-deleted-h48qw started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:21.306: INFO: simpletest-rc-to-be-deleted-rxszg started at 2023-02-02 21:39:11 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:21.306: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:21.306: INFO: simpletest-rc-to-be-deleted-z56g2 started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:21.306: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:21.306: INFO: simpletest-rc-to-be-deleted-5wsgv started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:21.306: INFO: simpletest-rc-to-be-deleted-vnnsg started at 2023-02-02 21:39:14 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:21.306: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:21.306: INFO: simpletest-rc-to-be-deleted-x6qwf started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:21.306: INFO: simpletest-rc-to-be-deleted-gtdxg started at 2023-02-02 21:39:11 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:21.306: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:21.306: INFO: simpletest-rc-to-be-deleted-sfgdc started at 2023-02-02 21:39:13 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:21.306: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:21.306: INFO: simpletest-rc-to-be-deleted-d9z8j started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:21.306: INFO: simpletest-rc-to-be-deleted-tbf4m started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:21.306: INFO: pod-configmaps-a44ef9c3-f9e2-4587-b5c3-d3c378922d47 started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:21.306: INFO: simpletest-rc-to-be-deleted-xngrc started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:21.306: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:21.306: INFO: simpletest-rc-to-be-deleted-rxgg2 started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:21.306: INFO: simpletest-rc-to-be-deleted-h9qv6 started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:21.306: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:21.306: INFO: simpletest-rc-to-be-deleted-qq9pt started at 2023-02-02 21:39:12 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:21.306: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:21.306: INFO: simpletest-rc-to-be-deleted-6h84m started at <nil> (0+0 container statuses recorded)
Feb 2 21:39:21.306: INFO: simpletest-rc-to-be-deleted-twchg started at 2023-02-02 21:39:15 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:21.306: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:21.306: INFO: simpletest-rc-to-be-deleted-hmx8g started at 2023-02-02 21:39:11 +0000 UTC (0+1 container statuses recorded)
Feb 2 21:39:21.306: INFO: Container nginx ready: false, restart count 0
Feb 2 21:39:21.811: INFO: Latency metrics for node e2e-7d89e54d79-37bac-windows-node-group-q21f
Feb 2 21:39:21.811: INFO: Waiting up to 3m0s for all (but 3) nodes to be ready
STEP: Destroying namespace "host-process-test-windows-5416" for this suite.
error during /home/prow/go/src/sigs.k8s.io/windows-testing/gce/run-e2e.sh --node-os-distro=windows -prepull-images=true --ginkgo.focus=\[Conformance\]|\[NodeConformance\]|\[sig-windows\]|\[Feature:Windows\] --ginkgo.skip=\[LinuxOnly\]|\[Serial\]|\[alpha\]|\[Slow\]|\[GMSA\]|Guestbook.application.should.create.and.stop.a.working.application|device.plugin.for.Windows|\[sig-api-machinery\].Aggregator|\[Driver:.windows-gcepd\]: exit status 1
from junit_runner.xml
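For anyone re-running this job locally, the failed harness invocation recorded above can be narrowed so only a single spec executes, which is usually the fastest way to confirm whether a conformance failure reproduces. A minimal sketch, assuming a checkout of sigs.k8s.io/windows-testing and a reachable test cluster; the harness path and the --node-os-distro, -prepull-images, and --ginkgo.skip flags are copied from the error line, while the --ginkgo.focus value is an illustrative substitute (not part of the recorded run):

  # Re-run the Windows e2e harness, focusing on one spec instead of all [Conformance] tests
  ./gce/run-e2e.sh --node-os-distro=windows -prepull-images=true \
    --ginkgo.focus='StatefulSet.*rolling.updates.*roll.backs' \
    --ginkgo.skip='\[LinuxOnly\]|\[Serial\]|\[alpha\]|\[Slow\]|\[GMSA\]'

A nonzero exit status from the harness, as seen here, simply propagates the ginkgo suite result; the per-spec failure detail lives in the junit XML named above.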
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
Kubernetes e2e suite [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
Kubernetes e2e suite [sig-api-machinery] server version should find the server version [Conformance]
Kubernetes e2e suite [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]
Kubernetes e2e suite [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]
Kubernetes e2e suite [sig-apps] CronJob should support CronJob API operations [Conformance]
Kubernetes e2e suite [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]
Kubernetes e2e suite [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [sig-apps] Deployment deployment should delete old replica sets [Conformance]
Kubernetes e2e suite [sig-apps] Deployment deployment should support proportional scaling [Conformance]
Kubernetes e2e suite [sig-apps] Deployment deployment should support rollover [Conformance]
Kubernetes e2e suite [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]
Kubernetes e2e suite [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]
Kubernetes e2e suite [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]
Kubernetes e2e suite [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]
Kubernetes e2e suite [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]
Kubernetes e2e suite [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]
Kubernetes e2e suite [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]
Kubernetes e2e suite [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
Kubernetes e2e suite [sig-apps] Job should apply changes to a job status [Conformance]
Kubernetes e2e suite [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname [Conformance]
Kubernetes e2e suite [sig-apps] Job should delete a job [Conformance]
Kubernetes e2e suite [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet Replace and Patch tests [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should release no longer matching pods [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]
Kubernetes e2e suite [sig-architecture] Conformance Tests should have at least two untainted nodes [Conformance]
Kubernetes e2e suite [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should mount projected service account token [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
Kubernetes e2e suite [sig-instrumentation] Events API should delete a collection of events [Conformance]
Kubernetes e2e suite [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
Kubernetes e2e suite [sig-instrumentation] Events should delete a collection of events [Conformance]
Kubernetes e2e suite [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for ExternalName services [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for pods for Hostname [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for services [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for the cluster [Conformance]
Kubernetes e2e suite [sig-network] DNS should support configurable pod DNS nameservers [Conformance]
Kubernetes e2e suite [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
Kubernetes e2e suite [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
Kubernetes e2e suite [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]
Kubernetes e2e suite [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]
Kubernetes e2e suite [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]
Kubernetes e2e suite [sig-network] Ingress API should support creating Ingress API operations [Conformance]
Kubernetes e2e suite [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance]
Kubernetes e2e suite [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
Kubernetes e2e suite [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
Kubernetes e2e suite [sig-network] Service endpoints latency should not be very high [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to create a functioning NodePort service [Conformance]
Kubernetes e2e suite [sig-network] Services should complete a service status lifecycle [Conformance]
Kubernetes e2e suite [sig-network] Services should delete a collection of services [Conformance]
Kubernetes e2e suite [sig-network] Services should find a service from listing all namespaces [Conformance]
Kubernetes e2e suite [sig-network] Services should provide secure master service [Conformance]
Kubernetes e2e suite [sig-network] Services should serve multiport endpoints from pods [Conformance]
Kubernetes e2e suite [sig-network] Services should test the lifecycle of an Endpoint [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]
Kubernetes e2e suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set [NodeConformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Containers should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Containers should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
Kubernetes e2e suite [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
Kubernetes e2e suite [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
Kubernetes e2e suite [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
Kubernetes e2e suite [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Lease lease API should be available [Conformance]
Kubernetes e2e suite [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS
Kubernetes e2e suite [sig-node] PodTemplates should delete a collection of pod templates [Conformance]
Kubernetes e2e suite [sig-node] PodTemplates should replace a pod template [Conformance]
Kubernetes e2e suite [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]
Kubernetes e2e suite [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
Kubernetes e2e suite [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should delete a collection of pods [Conformance]
Kubernetes e2e suite [sig-node] Pods should get a host IP [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]
Kubernetes e2e suite [sig-node] Pods should support pod readiness gates [NodeConformance]
Kubernetes e2e suite [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] PreStop should call prestop when killing a pod [Conformance]
Kubernetes e2e suite [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container should *not* be restarted with a GRPC liveness probe [NodeConformance]
Kubernetes e2e suite [sig-node] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container should be restarted with a GRPC liveness probe [NodeConformance]
Kubernetes e2e suite [sig-node] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]
Kubernetes e2e suite [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
Kubernetes e2e suite [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]
Kubernetes e2e suite [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]
Kubernetes e2e suite [sig-node] Secrets should patch a secret [Conformance]
Kubernetes e2e suite [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]
Kubernetes e2e suite [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
Kubernetes e2e suite [sig-storage] CSIStorageCapacity should support CSIStorageCapacities API operations [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
Kubernetes e2e suite [sig-storage] HostPath should support r/w [NodeConformance]
Kubernetes e2e suite [sig-storage] HostPath should support subPath [NodeConformance]
Kubernetes e2e suite [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance]
Kubernetes e2e suite [sig-windows] Hybrid cluster network for all supported CNIs should have stable networking for Linux and Windows pods
Kubernetes e2e suite [sig-windows] Hybrid cluster network for all supported CNIs should provide Internet connection for Linux containers using DNS [Feature:Networking-DNS]
Kubernetes e2e suite [sig-windows] Hybrid cluster network for all supported CNIs should provide Internet connection for Windows containers using DNS [Feature:Networking-DNS]
Kubernetes e2e suite [sig-windows] [Feature:WindowsHostProcessContainers] [MinimumKubeletVersion:1.22] HostProcess containers metrics should report count of started and failed to start HostProcess containers
Kubernetes e2e suite [sig-windows] [Feature:WindowsHostProcessContainers] [MinimumKubeletVersion:1.22] HostProcess containers should support init containers
Kubernetes e2e suite [sig-windows] [Feature:Windows] DNS should support configurable pod DNS servers
Kubernetes e2e suite [sig-windows] [Feature:Windows] Kubelet-Stats Kubelet stats collection for Windows nodes when running 3 pods should return within 10 seconds
Kubernetes e2e suite [sig-windows] [Feature:Windows] Kubelet-Stats Kubelet stats collection for Windows nodes when windows is booted should return bootid within 10 seconds
Kubernetes e2e suite [sig-windows] [Feature:Windows] SecurityContext should be able create pods and run containers with a given username
Kubernetes e2e suite [sig-windows] [Feature:Windows] SecurityContext should ignore Linux Specific SecurityContext if set
Kubernetes e2e suite [sig-windows] [Feature:Windows] SecurityContext should not be able to create pods with containers running as CONTAINERADMINISTRATOR when runAsNonRoot is true
Kubernetes e2e suite [sig-windows] [Feature:Windows] SecurityContext should not be able to create pods with containers running as ContainerAdministrator when runAsNonRoot is true
Kubernetes e2e suite [sig-windows] [Feature:Windows] SecurityContext should not be able to create pods with unknown usernames at Container level
Kubernetes e2e suite [sig-windows] [Feature:Windows] SecurityContext should not be able to create pods with unknown usernames at Pod level
Kubernetes e2e suite [sig-windows] [Feature:Windows] SecurityContext should override SecurityContext username if set
Kubernetes e2e suite [sig-windows] [Feature:Windows] Windows volume mounts check volume mount permissions container should have readOnly permissions on emptyDir
Kubernetes e2e suite [sig-windows] [Feature:Windows] Windows volume mounts check volume mount permissions container should have readOnly permissions on hostMapPath
kubetest Check APIReachability
kubetest Deferred TearDown
kubetest DumpClusterLogs
kubetest Extract
kubetest GetDeployer
kubetest Prepare
kubetest TearDown
kubetest TearDown Previous
kubetest Timeout
kubetest Up
kubetest diffResources
kubetest list nodes
kubetest listResources After
kubetest listResources Before
kubetest listResources Down
kubetest listResources Up
kubetest test setup
Kubernetes e2e suite [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration
Kubernetes e2e suite [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (fairness)
Kubernetes e2e suite [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (priority)
Kubernetes e2e suite [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [Flaky] kubectl explain works for CR with the same resource name as built-in object.
Kubernetes e2e suite [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin][Alpha][Feature:CustomResourceValidationExpressions] MUST NOT fail validation for create of a custom resource that satisfies the x-kubernetes-validator rules
Kubernetes e2e suite [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin][Alpha][Feature:CustomResourceValidationExpressions] MUST fail create of a custom resource definition that contains a x-kubernetes-validator rule that refers to a property that do not exist
Kubernetes e2e suite [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin][Alpha][Feature:CustomResourceValidationExpressions] MUST fail validation for create of a custom resource that does not satisfy the x-kubernetes-validator rules
Kubernetes e2e suite [sig-api-machinery] Discovery Custom resource should have storage version hash
Kubernetes e2e suite [sig-api-machinery] Discovery should accurately determine present and missing resources
Kubernetes e2e suite [sig-api-machinery] Etcd failure [Disruptive] should recover from SIGKILL
Kubernetes e2e suite [sig-api-machinery] Etcd failure [Disruptive] should recover from network partition with master
Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob
Kubernetes e2e suite [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil
Kubernetes e2e suite [sig-api-machinery] Garbage collector should support cascading deletion of custom resources
Kubernetes e2e suite [sig-api-machinery] Garbage collector should support orphan deletion of custom resources
Kubernetes e2e suite [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
Kubernetes e2e suite [sig-api-machinery] Generated clientset should create v1 cronJobs, delete cronJobs, watch cronJobs
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should always delete fast (ALL of 100 namespaces in 150 seconds) [Feature:ComprehensiveNamespaceDraining]
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's multiple priority class scope (quota set to pod count: 2) against 2 pods with same priority classes.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (cpu, memory quota set) against a pod with same priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with different priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with same priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpExists).
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpNotIn).
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with same priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with best effort scope using scope-selectors.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with terminating scopes through scope selectors.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.
Kubernetes e2e suite [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s
Kubernetes e2e suite [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL
Kubernetes e2e suite [sig-api-machinery] Server request timeout the request should be served with a default timeout if the specified timeout in the request URL exceeds maximum allowed
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should create an applied object if it does not already exist
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should give up ownership of a field if forced applied by a controller
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should work for CRDs
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should work for subresources
Kubernetes e2e suite [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls
Kubernetes e2e suite [sig-api-machinery] Servers with support for API chunking should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow]
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return pod details
Kubernetes e2e suite [sig-api-machinery] StorageVersion resources [Feature:StorageVersionAPI] storage version with non-existing id should be GC'ed
Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json"
Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json,application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf,application/json"
Kubernetes e2e suite [sig-api-machinery] health handlers should contain necessary checks
Kubernetes e2e suite [sig-apps] CronJob should be able to schedule after more than 100 missed schedule
Kubernetes e2e suite [sig-apps] CronJob should delete failed finished jobs with limit of one job
Kubernetes e2e suite [sig-apps] CronJob should delete successful finished jobs with limit of one successful job
Kubernetes e2e suite [sig-apps] CronJob should not emit unexpected warnings
Kubernetes e2e suite [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]
Kubernetes e2e suite [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
Kubernetes e2e suite [sig-apps] CronJob should remove from active list jobs that have been deleted
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should run and stop complex daemon with node affinity
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should surge pods onto nodes when spec was updated and update strategy is RollingUpdate
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Kube-proxy should recover after being killed accidentally
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Kubelet should not restart containers across restart
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart
Kubernetes e2e suite [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods
Kubernetes e2e suite [sig-apps] Deployment iterative rollouts should eventually progress
Kubernetes e2e suite [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout
Kubernetes e2e suite [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef
Kubernetes e2e suite [sig-apps] DisruptionController evictions: enough pods, absolute => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer => should not allow an eviction [Serial]
Kubernetes e2e suite [sig-apps] DisruptionController evictions: no PDB => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: too few pods, absolute => should not allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction [Serial]
Kubernetes e2e suite [sig-apps] DisruptionController should observe that the PodDisruptionBudget status is not updated for unmanaged pods
Kubernetes e2e suite [sig-apps] Job should delete pods when suspended
Kubernetes e2e suite [sig-apps] Job should fail to exceed backoffLimit
Kubernetes e2e suite [sig-apps] Job should fail when exceeds active deadline
Kubernetes e2e suite [sig-apps] Job should manage the lifecycle of a job
Kubernetes e2e suite [sig-apps] Job should not create pods when created in suspend state
Kubernetes e2e suite [sig-apps] Job should remove pods when job is deleted
Kubernetes e2e suite [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted
Kubernetes e2e suite [sig-apps] Job should run a job to completion when tasks succeed
Kubernetes e2e suite [sig-apps] Job should run a job to completion with CPU requests [Serial]
Kubernetes e2e suite [sig-apps] ReplicaSet should serve a basic image on each replica with a private image
Kubernetes e2e suite [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota
Kubernetes e2e suite [sig-apps] ReplicationController should serve a basic image on each replica with a private image
Kubernetes e2e suite [sig-apps] StatefulSet AvailableReplicas should get updated accordingly when MinReadySeconds is enabled
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity
Kubernetes e2e suite [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster
Kubernetes e2e suite [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working mysql cluster
Kubernetes e2e suite [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working redis cluster
Kubernetes e2e suite [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working zookeeper cluster
Kubernetes e2e suite [sig-apps] StatefulSet MinReadySeconds should be honored when enabled
Kubernetes e2e suite [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs after adopting pod (WhenDeleted)
Kubernetes e2e suite [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs after adopting pod (WhenScaled) [Feature:StatefulSetAutoDeletePVC]
Kubernetes e2e suite [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs with a OnScaledown policy
Kubernetes e2e suite [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs with a WhenDeleted policy
Kubernetes e2e suite [sig-apps] TTLAfterFinished job should be deleted once it finishes after TTL seconds
Kubernetes e2e suite [sig-apps] stateful Upgrade [Feature:StatefulUpgrade] stateful upgrade should maintain a functioning cluster
Kubernetes e2e suite [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR
Kubernetes e2e suite [sig-auth] PodSecurityPolicy [Feature:PodSecurityPolicy] should allow pods under the privileged policy.PodSecurityPolicy
Kubernetes e2e suite [sig-auth] PodSecurityPolicy [Feature:PodSecurityPolicy] should enforce the restricted policy.PodSecurityPolicy
Kubernetes e2e suite [sig-auth] PodSecurityPolicy [Feature:PodSecurityPolicy] should forbid pod creation when no PSP is available
Kubernetes e2e suite [sig-auth] ServiceAccount admission controller migration [Feature:BoundServiceAccountTokenVolume] master upgrade should maintain a functioning cluster
Kubernetes e2e suite [sig-auth] ServiceAccounts no secret-based service account token should be auto-generated
Kubernetes e2e suite [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthenticator] The kubelet can delegate ServiceAccount tokens to the API server
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthenticator] The kubelet's main port 10250 should reject requests with no credentials
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to create another node
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to delete another node
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent configmap should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent secret should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting a secret for a workload the node has access to should succeed
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting an existing configmap should exit with the Forbidden error
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting an existing secret should exit with the Forbidden error
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down empty nodes [Feature:ClusterAutoscalerScalability3]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up at all [Feature:ClusterAutoscalerScalability1]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up twice [Feature:ClusterAutoscalerScalability2]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group up from 0[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should not scale GPU pool up if pod does not require GPUs [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should scale down GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 0 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Shouldn't perform scale up operation and should list unhealthy status if most of the cluster is broken[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining multiple pods one by one as dictated by pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down when rescheduling a pod is required and pdb allows for it[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and one node is broken [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting EmptyDir volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to pod anti-affinity [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should scale up when non expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale down when non expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is preempted [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't trigger additional scale-ups during processing scale-up [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed
Kubernetes e2e suite [sig-autoscaling] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
Kubernetes e2e suite [sig-autoscaling] [Feature:ClusterSizeAutoscalingScaleUp] [Slow] Autoscaling Autoscaling a service from 1 pod and 3 nodes to 8 pods and >=4 nodes takes less than 15 minutes
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 1 pod to 2 pods
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 2 pods to 1 pod [Slow]
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should not scale up on a busy sidecar with an idle application
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should scale from 1 pod to 3 pods and from 3 to 5 on a busy application with an idle sidecar container
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with short downscale stabilization window should scale down soon after the stabilization period
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Object from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Pod from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Pod from Stackdriver with Prometheus [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with External Metric with target average value from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with External Metric with target value from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale up with two External metrics from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale up with two metrics of type Pod from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl apply apply set/view last-applied
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl cluster-info dump should check if cluster-info dump succeeds
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl taint [Serial] should remove all the taints with the same key off a node
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl taint [Serial] should update the taint on a node
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl validation should create/apply a CR with unknown fields for CRD with no validation schema
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl validation should create/apply a valid CR for CRD with validation schema
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should contain last line of the log
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should handle in-cluster config
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command with --leave-stdin-open
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command without --restart=Never
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command without --restart=Never, but with --rm
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a failing command
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a successful command
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes running a failing command
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes running a successful command
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec using resource/name
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support inline execution and attach
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support port-forward
Kubernetes e2e suite [sig-cli] Kubectl client kubectl wait should ignore not found error with --for=delete
Kubernetes e2e suite [sig-cloud-provider-gcp] Addon update should propagate add-on file changes [Slow]
Kubernetes e2e suite [sig-cloud-provider-gcp] Downgrade [Feature:Downgrade] cluster downgrade should maintain a functioning cluster [Feature:ClusterDowngrade]
Kubernetes e2e suite [sig-cloud-provider-gcp] GKE node pools [Feature:GKENodePool] should create a cluster with multiple node pools [Feature:GKENodePool]
Kubernetes e2e suite [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas different zones [Serial][Disruptive]
Kubernetes e2e suite [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas multizone workers [Serial][Disruptive]
Kubernetes e2e suite [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas same zone [Serial][Disruptive]
Kubernetes e2e suite [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to add nodes
Kubernetes e2e suite [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to delete nodes
Kubernetes e2e suite [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to cadvisor port 4194 using proxy subresource
Kubernetes e2e suite [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to the readonly kubelet port 10255 using proxy subresource
Kubernetes e2e suite [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not have port 10255 open on its all public IP addresses
Kubernetes e2e suite [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not have port 4194 open on its all public IP addresses
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering unclean reboot and ensure they function upon restart
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by triggering kernel panic and ensure they function upon restart
Kubernetes e2e suite [sig-cloud-provider-gcp] Recreate [Feature:Recreate] recreate nodes and ensure they function upon restart
Kubernetes e2e suite [sig-cloud-provider-gcp] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover
Kubernetes e2e suite [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade]
Kubernetes e2e suite [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] master upgrade should maintain a functioning cluster [Feature:MasterUpgrade]
Kubernetes e2e suite [sig-cloud-provider-gcp] [Disruptive]NodeLease NodeLease deletion node lease should be deleted when corresponding node is deleted
Kubernetes e2e suite [sig-cloud-provider] [Feature:CloudProvider][Disruptive] Nodes should be deleted on API server if it doesn't exist in the cloud provider
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the signed bootstrap tokens from clusterInfo ConfigMap when bootstrap token is deleted
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the token secret when the secret expired
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should not delete the token secret when the secret is not expired
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should resign the bootstrap tokens when the clusterInfo ConfigMap updated [Serial][Disruptive]
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should sign the new added bootstrap tokens
Kubernetes e2e suite [sig-instrumentation] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from API server.
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should have accelerator metrics [Feature:StackdriverAcceleratorMonitoring]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should have cluster metrics [Feature:StackdriverMonitoring]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for external metrics [Feature:StackdriverExternalMetrics]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for new resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for old resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Stackdriver Metadata Agent [Feature:StackdriverMetadataAgent]
Kubernetes e2e suite [sig-network] CVE-2021-29923 IPv4 Service Type ClusterIP with leading zeros should work interpreted as decimal
Kubernetes e2e suite [sig-network] ClusterDns [Feature:Example] should create pod that uses dns
Kubernetes e2e suite [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready
Kubernetes e2e suite [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
Kubernetes e2e suite [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service
Kubernetes e2e suite [sig-network] Conntrack should drop INVALID conntrack entries [Privileged]
Kubernetes e2e suite [sig-network] DNS configMap nameserver Change stubDomain should be able to change stubDomain configuration [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS configMap nameserver Forward external name lookup should forward externalname lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS should provide DNS for the cluster [Provider:GCE]
Kubernetes e2e suite [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]
Kubernetes e2e suite [sig-network] DNS should support configurable pod resolv.conf
Kubernetes e2e suite [sig-network] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service
Kubernetes e2e suite [sig-network] Firewall rule control plane should not expose well-known ports
Kubernetes e2e suite [sig-network] Firewall rule should have correct firewall rules for e2e cluster
Kubernetes e2e suite [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] IngressClass [Feature:Ingress] should allow IngressClass to have Namespace-scoped parameters [Serial]
Kubernetes e2e suite [sig-network] IngressClass [Feature:Ingress] should not set default value if no default IngressClass [Serial]
Kubernetes e2e suite [sig-network] IngressClass [Feature:Ingress] should prevent Ingress creation if more than 1 IngressClass marked as default [Serial]
Kubernetes e2e suite [sig-network] IngressClass [Feature:Ingress] should set default value on new IngressClass [Serial]
Kubernetes e2e suite [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]
Kubernetes e2e suite [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field
Kubernetes e2e suite [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints
Kubernetes e2e suite [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer
Kubernetes e2e suite [sig-network] LoadBalancers ESIPP [Slow] should work for type=NodePort
Kubernetes e2e suite [sig-network] LoadBalancers ESIPP [Slow] should work from pods
Kubernetes e2e suite [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow]
Kubernetes e2e suite [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow]
Kubernetes e2e suite [sig-network] LoadBalancers should be able to create LoadBalancer Service without NodePort and change it [Slow]
Kubernetes e2e suite [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow]
Kubernetes e2e suite [sig-network] LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP off [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [sig-network] LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP on [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [sig-network] LoadBalancers should handle load balancer cleanup finalizer for service [Slow]
Kubernetes e2e suite [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP off [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP on [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [sig-network] LoadBalancers should only allow access from service loadbalancer source ranges [Slow]
Kubernetes e2e suite [sig-network] LoadBalancers should reconcile LB health check interval [Slow][Serial][Disruptive]
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] should conform to Ingress spec
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] rolling update backend pods should not cause service disruption
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should be able to create a ClusterIP service
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should be able to switch between IG and NEG modes
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should conform to Ingress spec
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should create NEGs for all ports with the Ingress annotation, and NEGs for the standalone annotation otherwise
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should sync endpoints for both Ingress-referenced NEG and standalone NEG
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should sync endpoints to NEG
Kubernetes e2e suite [sig-network] Loadbalancing: L7 Scalability GCE [Slow] [Serial] [Feature:IngressScale] Creating and updating ingresses should happen promptly with small/medium/large amount of ingresses
Kubernetes e2e suite [sig-network] Netpol API should support creating NetworkPolicy API operations
Kubernetes e2e suite [sig-network] Netpol API should support creating NetworkPolicy API with endport field [Feature:NetworkPolicyEndPort]
Kubernetes e2e suite [sig-network] Netpol API should support creating NetworkPolicy with Status subresource [Feature:NetworkPolicyStatus]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should deny egress from all pods in a namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should deny egress from pods based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should deny ingress from pods on other namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce ingress policy allowing any port traffic to a server on a specific protocol [Feature:NetworkPolicy] [Feature:UDP]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Multiple PodSelectors and NamespaceSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on any PodSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic for a target [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic from pods in all namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic based on NamespaceSelector with MatchLabels using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should not allow access by TCP when a policy specifies only UDP [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should not mistakenly treat 'protocol: SCTP' as 'protocol: TCP', even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should properly isolate pods that are selected by a policy allowing SCTP, even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should support denying of egress traffic on the client side (even if the server explicitly allows this traffic) [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should work with Ingress, Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations
Kubernetes e2e suite [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should support a 'default-deny' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should work with Ingress, Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should be able to handle large requests: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for endpoint-Service: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for endpoint-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for multiple endpoint-Services with same selector
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for node-Service: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for node-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for node-Service: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for pod-Service: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for pod-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for pod-Service: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for service endpoints using hostNetwork
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should support basic nodePort: udp functionality
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update endpoints: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update endpoints: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update nodePort: http [Slow]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]
Kubernetes e2e suite [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2
Kubernetes e2e suite [sig-network] Networking should check kube-proxy urls
Kubernetes e2e suite [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]
Kubernetes e2e suite [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv6][Experimental][LinuxOnly]
Kubernetes e2e suite [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services
Kubernetes e2e suite [sig-network] Networking should provide Internet connection for containers using DNS [Feature:Networking-DNS]
Kubernetes e2e suite [sig-network] Networking should recreate its iptables rules if they are deleted [Disruptive]
Kubernetes e2e suite [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT
Kubernetes e2e suite [sig-network] Proxy version v1 should proxy logs on node using proxy subresource
Kubernetes e2e suite [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource
Kubernetes e2e suite [sig-network] SCTP [LinuxOnly] should allow creating a basic SCTP service with pod and endpoints
Kubernetes e2e suite [sig-network] SCTP [LinuxOnly] should create a ClusterIP Service with SCTP ports
Kubernetes e2e suite [sig-network] SCTP [LinuxOnly] should create a Pod with SCTP HostPort
Kubernetes e2e suite [sig-network] Services GCE [Slow] should be able to create and tear down a standard-tier load balancer [Slow]
Kubernetes e2e suite [sig-network] Services should allow pods to hairpin back to themselves through services
Kubernetes e2e suite [sig-network] Services should be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is true
Kubernetes e2e suite [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to up and down services
Kubernetes e2e suite [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols
Kubernetes e2e suite [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
Kubernetes e2e suite [sig-network] Services should be rejected for evicted pods (no endpoints exist)
Kubernetes e2e suite [sig-network] Services should be rejected when no endpoints exist
Kubernetes e2e suite [sig-network] Services should check NodePort out-of-range
Kubernetes e2e suite [sig-network] Services should create endpoints for unready pods
Kubernetes e2e suite [sig-network] Services should fail health check node port if there are only terminating endpoints [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [sig-network] Services should fallback to local terminating endpoints when there are no ready endpoints with externalTrafficPolicy=Local [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [sig-network] Services should fallback to local terminating endpoints when there are no ready endpoints with internalTrafficPolicy=Local [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [sig-network] Services should fallback to terminating endpoints when there are no ready endpoints with externalTrafficPolicy=Cluster [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [sig-network] Services should fallback to terminating endpoints when there are no ready endpoints with internalTrafficPolicy=Cluster [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should implement service.kubernetes.io/headless
Kubernetes e2e suite [sig-network] Services should implement service.kubernetes.io/service-proxy-name
Kubernetes e2e suite [sig-network] Services should not be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is false
Kubernetes e2e suite [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
Kubernetes e2e suite [sig-network] Services should prevent NodePort collisions
Kubernetes e2e suite [sig-network] Services should release NodePorts on delete
Kubernetes e2e suite [sig-network] Services should respect internalTrafficPolicy=Local Pod (hostNetwork: true) to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [sig-network] Services should respect internalTrafficPolicy=Local Pod to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [sig-network] Services should work after restarting apiserver [Disruptive]
Kubernetes e2e suite [sig-network] Services should work after restarting kube-proxy [Disruptive]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: http
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: udp
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: http
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: udp
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: http
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: udp
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: http
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: udp
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for service endpoints using hostNetwork
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should update endpoints: http
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should update endpoints: udp
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should be able to reach pod on ipv4 and ipv6 ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create a single stack service with cluster ip from primary service range
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create pod, add ipv6 and ipv4 ip to pod ips
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create service with ipv4 cluster ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create service with ipv4,v6 cluster ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create service with ipv6 cluster ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create service with ipv6,v4 cluster ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should have ipv4 and ipv6 internal node ip
Kubernetes e2e suite [sig-network] [Feature:PerformanceDNS][Serial] Should answer DNS query for maximum number of services per cluster
Kubernetes e2e suite [sig-network] [Feature:Topology Hints] should distribute endpoints evenly
Kubernetes e2e suite [sig-network] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Downgrade kube-proxy from a DaemonSet to static pods should maintain a functioning cluster [Feature:KubeProxyDaemonSetDowngrade]
Kubernetes e2e suite [sig-network] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Upgrade kube-proxy from static pods to a DaemonSet should maintain a functioning cluster [Feature:KubeProxyDaemonSetUpgrade]
Kubernetes e2e suite [sig-node] AppArmor load AppArmor profiles can disable an AppArmor profile, using unconfined
Kubernetes e2e suite [sig-node] AppArmor load AppArmor profiles should enforce an AppArmor profile
Kubernetes e2e suite [sig-node] ConfigMap should update ConfigMap successfully
Kubernetes e2e suite [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide container's limits.hugepages-<pagesize> and requests.hugepages-<pagesize> as env vars
Kubernetes e2e suite [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide default limits.hugepages-<pagesize> from node allocatable
Kubernetes e2e suite [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
Kubernetes e2e suite [sig-node] Ephemeral Containers [NodeFeature:EphemeralContainers] will start an ephemeral container in an existing pod
Kubernetes e2e suite [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running
Kubernetes e2e suite [sig-node] Kubelet [Serial] [Slow] experimental resource usage tracking [Feature:ExperimentalResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [sig-node] Kubelet [Serial] [Slow] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 0 pods per node
Kubernetes e2e suite [sig-node] Kubelet [Serial] [Slow] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Mount propagation should propagate mounts within defined scopes
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Multiple Pods [Serial] only evicts pods without tolerations from tainted nodes
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Single Pod [Serial] doesn't evict pod with tolerations from tainted nodes
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evicts pod with finite tolerations from tainted nodes
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Single Pod [Serial] evicts pods from tainted nodes
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]
Kubernetes e2e suite [sig-node] NodeLease NodeLease should have OwnerReferences set
Kubernetes e2e suite [sig-node] NodeLease NodeLease the kubelet should create and update a lease in the kube-node-lease namespace
Kubernetes e2e suite [sig-node] NodeLease NodeLease the kubelet should report node status infrequently
Kubernetes e2e suite [sig-node] NodeProblemDetector should run without error
Kubernetes e2e suite [sig-node] Pod garbage collector [Feature:PodGarbageCollector] [Slow] should handle the creation of 1000 pods
Kubernetes e2e suite [sig-node] Pods Extended Delete Grace Period should be submitted and removed
Kubernetes e2e suite [sig-node] Pods Extended Pod Container Status should never report container start when an init container fails
Kubernetes e2e suite [sig-node] Pods Extended Pod Container Status should never report success for a pending container
Kubernetes e2e suite [sig-node] Pods Extended Pod Container lifecycle evicted pods should be terminal
Kubernetes e2e suite [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done
Kubernetes e2e suite [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
Kubernetes e2e suite [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
Kubernetes e2e suite [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process
Kubernetes e2e suite [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
Kubernetes e2e suite [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it
Kubernetes e2e suite [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe
Kubernetes e2e suite [sig-node] Probing container should be ready immediately after startupProbe succeeds
Kubernetes e2e suite [sig-node] Probing container should be restarted by liveness probe after startup probe enables it
Kubernetes e2e suite [sig-node] Probing container should be restarted when startup probe fails
Kubernetes e2e suite [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout
Kubernetes e2e suite [sig-node] Probing container should be restarted with a local redirect http liveness probe
Kubernetes e2e suite [sig-node] Probing container should mark readiness on pods to false and disable liveness probes while pod is in the process of terminating
Kubernetes e2e suite [sig-node] Probing container should mark readiness on pods to false while pod is in the process of terminating when a pod has a readiness probe
Kubernetes e2e suite [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]
Kubernetes e2e suite [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector
Kubernetes e2e suite [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling with taints [Serial]
Kubernetes e2e suite [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling without taints
Kubernetes e2e suite [sig-node] SSH should SSH to all nodes and run commands
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
Kubernetes e2e suite [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support volume SELinux relabeling [Flaky] [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support volume SELinux relabeling when using hostIPC [Flaky] [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support volume SELinux relabeling when using hostPID [Flaky] [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Sysctls [LinuxOnly] [NodeConformance] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
Kubernetes e2e suite [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
Kubernetes e2e suite [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
Kubernetes e2e suite [sig-node] [Feature:Example] Downward API should create a pod that prints its name and namespace
Kubernetes e2e suite [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted
Kubernetes e2e suite [sig-node] [Feature:Example] Secret should create a pod that reads a secret
Kubernetes e2e suite [sig-node] crictl should be able to run crictl on the node
Kubernetes e2e suite [sig-node] gpu Upgrade [Feature:GPUUpgrade] cluster downgrade should be able to run gpu pod after downgrade [Feature:GPUClusterDowngrade]
Kubernetes e2e suite [sig-node] gpu Upgrade [Feature:GPUUpgrade] cluster upgrade should be able to run gpu pod after upgrade [Feature:GPUClusterUpgrade]
Kubernetes e2e suite [sig-node] gpu Upgrade [Feature:GPUUpgrade] master upgrade should NOT disrupt gpu pod [Feature:GPUMasterUpgrade]
Kubernetes e2e suite [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.
Kubernetes e2e suite [sig-node] kubelet host cleanup with volume mounts [HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (active) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [sig-node] kubelet host cleanup with volume mounts [HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (sleeping) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [sig-scheduling] GPUDevicePluginAcrossRecreate [Feature:Recreate] run Nvidia GPU Device Plugin tests with a recreation
Kubernetes e2e suite [sig-scheduling] Multi-AZ Clusters should spread the pods of a replication controller across zones [Serial]
Kubernetes e2e suite [sig-scheduling] Multi-AZ Clusters should spread the pods of a service across zones [Serial]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate
Kubernetes e2e suite [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to a node that doesn't match the PodAntiAffinity terms
Kubernetes e2e suite [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed
Kubernetes e2e suite [sig-scheduling] [Feature:GPUDevicePlugin] run Nvidia GPU Device Plugin tests
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access two volumes with different volume modes and retain data across pod recreation on a different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access two volumes with different volume modes and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access two volumes with the same volume mode and retain data across pod recreation on a different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different nodes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PVs pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PVs pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volume-lifecycle-performance should provision volumes at scale within performance constraints [Slow] [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
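The subPath cases above all vary one field: a volumeMount's subPath, which exposes a single directory or file of the volume instead of its root. A minimal sketch of such a mount (volume name and sub-directory are illustrative assumptions; the readOnly cases additionally set ReadOnly):

package sketch

import v1 "k8s.io/api/core/v1"

// subPathMount mounts only the "html" directory of the backing volume,
// the construct the subPath tests probe for escapes, deletion, and
// read-only enforcement.
func subPathMount() v1.VolumeMount {
    return v1.VolumeMount{
        Name:      "test-volume", // illustrative volume name
        MountPath: "/usr/local/apache2/htdocs",
        SubPath:   "html", // illustrative sub-directory
        ReadOnly:  false,
    }
}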
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
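The volume-expand case above checks that resizing a PVC is refused unless the provisioning class sets allowVolumeExpansion. A sketch of a class that would permit it (the class name is an illustrative assumption; the test asserts the opposite case, a class without the field):

package sketch

import (
    storagev1 "k8s.io/api/storage/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// expandableClass provisions GCE PD CSI volumes whose claims may later
// be grown in place; omitting AllowVolumeExpansion makes the apiserver
// reject resize requests, which is what the test verifies.
func expandableClass() *storagev1.StorageClass {
    allow := true
    return &storagev1.StorageClass{
        ObjectMeta:           metav1.ObjectMeta{Name: "pd-expandable"}, // illustrative
        Provisioner:          "pd.csi.storage.gke.io",
        AllowVolumeExpansion: &allow,
    }
}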
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
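The two topology cases above combine delayed binding with allowedTopologies: provisioning waits for a consuming pod, then must land inside the listed zones, and a pod whose own topology conflicts with them cannot be scheduled. A sketch of such a class, with the class name and zone as illustrative assumptions:

package sketch

import (
    v1 "k8s.io/api/core/v1"
    storagev1 "k8s.io/api/storage/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// zonalClass restricts where the GCE PD CSI driver may place volumes;
// topology.gke.io/zone is the driver's topology key.
func zonalClass() *storagev1.StorageClass {
    mode := storagev1.VolumeBindingWaitForFirstConsumer
    return &storagev1.StorageClass{
        ObjectMeta:        metav1.ObjectMeta{Name: "pd-zonal"}, // illustrative
        Provisioner:       "pd.csi.storage.gke.io",
        VolumeBindingMode: &mode,
        AllowedTopologies: []v1.TopologySelectorTerm{{
            MatchLabelExpressions: []v1.TopologySelectorLabelRequirement{{
                Key:    "topology.gke.io/zone",
                Values: []string{"us-central1-a"}, // illustrative zone
            }},
        }},
    }
}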
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
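In the multiVolume cases above, "different volume mode" pairs a Filesystem-mode PVC (surfaced through volumeMounts) with a Block-mode PVC (surfaced through volumeDevices) in one container. A sketch of that container shape, with the container name, image, and paths as illustrative assumptions:

package sketch

import v1 "k8s.io/api/core/v1"

// mixedModeContainer consumes one filesystem volume and one raw block
// device at the same time, the combination the "different volume mode"
// tests recreate across pod restarts and node moves.
func mixedModeContainer() v1.Container {
    return v1.Container{
        Name:  "app", // illustrative
        Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-2", // illustrative image
        VolumeMounts: []v1.VolumeMount{
            {Name: "fs-vol", MountPath: "/mnt/fs"}, // Filesystem-mode PVC
        },
        VolumeDevices: []v1.VolumeDevice{
            {Name: "block-vol", DevicePath: "/dev/xvda"}, // Block-mode PVC
        },
    }
}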
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]