PR kolyshkin: Update runc to 1.0.0
Result: FAILURE
Tests: 1 failed / 7 succeeded
Started: 2021-06-23 13:52
Elapsed: 2h16m
Revision:
Builder: 4a347728-d42a-11eb-8cb1-ceb3c9951315
Refs: master:7b24c7e4, 102508:3246ca75
infra-commit: d795de87c
job-version: v1.22.0-beta.0.29+3b2a5902bf90d3
kubetest-version:
repo: k8s.io/kubernetes
repo-commit: 3b2a5902bf90d32cef5cb03202932fd80b9a0dfc
repos: {u'k8s.io/kubernetes': u'master:7b24c7e4a7a644bd9c4aa173d59fd5bdcddc8652,102508:3246ca7554fa69e3358c0ff0b0324d4da1447053'}
revision: v1.22.0-beta.0.29+3b2a5902bf90d3

Test Failures


kubetest Node Tests 2h14m

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-pr-node-e2e --zone=us-west1-b --ssh-user=core --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --focus="\[Serial\]" --skip="\[Flaky\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeAlphaFeature:.+\]" --test_args=--feature-gates=DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service --non-masquerade-cidr=0.0.0.0/0" --extra-log="{\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}" --test-timeout=7h0m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/crio/latest/image-config-cgrpv1-serial.yaml: exit status 1
				from junit_runner.xml


Error lines from build-log.txt

... skipping 345 lines ...
W0623 13:54:53.864]   "ignition": {
W0623 13:54:53.864]     "version": "3.1.0"
W0623 13:54:53.864]   },
W0623 13:54:53.864]   "systemd": {
W0623 13:54:53.864]     "units": [
W0623 13:54:53.865]       {
W0623 13:54:53.866]         "contents": "[Unit]\nDescription=Download and install crio binaries and configurations.\nAfter=network-online.target\n\n[Service]\nType=oneshot\nExecStartPre=/usr/sbin/setenforce 1\nExecStartPre=/usr/bin/bash -c '/usr/bin/curl --fail --retry 5 --retry-delay 3 --silent --show-error -o /usr/local/crio-install.sh  https://raw.githubusercontent.com/cri-o/cri-o/master/scripts/get'\nExecStartPre=/usr/bin/bash /usr/local/crio-install.sh\nExecStartPre=/usr/bin/mkdir -p /var/lib/kubelet\nExecStartPre=/usr/bin/chcon -R -u system_u -r object_r -t var_lib_t /var/lib/kubelet\nExecStartPre=/usr/bin/mount /tmp /tmp -o remount,exec,suid\nExecStartPre=/usr/bin/chcon -u system_u -r object_r -t container_runtime_exec_t /usr/local/bin/crio /usr/local/bin/crio-status /usr/local/bin/runc /usr/local/bin/crun\nExecStartPre=/usr/bin/chcon -u system_u -r object_r -t bin_t /usr/local/bin/conmon /usr/local/bin/crictl /usr/local/bin/pinns\nExecStartPre=/usr/bin/chcon -R -u system_u -r object_r -t bin_t /opt/cni/bin/\nExecStartPre=/usr/bin/rm -f  /etc/cni/net.d/87-podman-bridge.conflist\nExecStartPre=/usr/bin/bash -c 'echo -e \"[crio.runtime]\\n  default_runtime = \\\\\\\"runc\\\\\\\"\\n[crio.runtime.runtimes]\\n  [crio.runtime.runtimes.runc]\\n    runtime_path=\\\\\\\"/usr/local/bin/runc\\\\\\\"\" \u003e /etc/crio/crio.conf.d/20-runc.conf'\nExecStartPre=/usr/bin/bash -c 'echo -e \"[crio.runtime]\\n  [crio.runtime.runtimes]\\n  [crio.runtime.runtimes.test-handler]\\n    runtime_path=\\\\\\\"/usr/local/bin/crun\\\\\\\"\" \u003e /etc/crio/crio.conf.d/10-crun.conf'\nExecStartPre=/usr/bin/chcon -R -u system_u -r object_r -t container_config_t /etc/crio /etc/crio/crio.conf /usr/local/share/oci-umount/oci-umount.d/crio-umount.conf\nExecStartPre=/usr/bin/systemctl enable crio.service\nExecStartPre=/usr/bin/chcon -R -u system_u -r object_r -t systemd_unit_file_t /usr/local/lib/systemd/system/crio.service\nExecStart=/usr/bin/systemctl start crio.service\n\n[Install]\nWantedBy=multi-user.target\n",
W0623 13:54:53.866]         "enabled": true,
W0623 13:54:53.866]         "name": "crio-install.service"
W0623 13:54:53.867]       }
W0623 13:54:53.867]     ]
W0623 13:54:53.867]   }
W0623 13:54:53.867] }
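
The part of that unit that matters for this PR (updating runc to 1.0.0) is the heavily escaped pair of echo commands that write CRI-O runtime drop-ins. Peeling off the Ignition, systemd, and shell layers of escaping, the 20-runc.conf drop-in comes out to roughly the config embedded in the Go sketch below; the file contents are reconstructed from the log, and the program itself is only illustrative.

// Sketch: materialize the CRI-O drop-in that crio-install.service writes via
// its escaped `echo -e` (contents reconstructed from the unit above; exact
// whitespace is approximate). 10-crun.conf is analogous, registering
// /usr/local/bin/crun as the "test-handler" runtime.
package main

import "os"

func main() {
	runcConf := `[crio.runtime]
  default_runtime = "runc"
[crio.runtime.runtimes]
  [crio.runtime.runtimes.runc]
    runtime_path="/usr/local/bin/runc"
`
	if err := os.WriteFile("/etc/crio/crio.conf.d/20-runc.conf", []byte(runcConf), 0o644); err != nil {
		panic(err)
	}
}
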
... skipping 4 lines ...
I0623 13:54:53.968] make: Entering directory '/go/src/k8s.io/kubernetes'
I0623 13:54:53.969] make[1]: Entering directory '/go/src/k8s.io/kubernetes'
W0623 13:54:54.112] I0623 13:54:54.112226    6143 run_remote.go:579] Creating instance {image:fedora-coreos-34-20210529-3-0-gcp-x86-64 imageDesc:fedora-coreos-34-20210529-3-0-gcp-x86-64 kernelArguments:[] project:fedora-coreos-cloud resources:{Accelerators:[]} metadata:0xc0004d7180 machine:n1-standard-2 tests:[]} with service account "1046294573453-compute@developer.gserviceaccount.com"
I0623 13:55:04.171] +++ [0623 13:55:04] Building go targets for linux/amd64:
I0623 13:55:04.172]     ./vendor/k8s.io/code-generator/cmd/prerelease-lifecycle-gen
I0623 13:55:13.529] Generating prerelease lifecycle code for 27 targets
W0623 13:55:15.802] I0623 13:55:15.802420    6143 ssh.go:113] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@34.145.49.26 -- sudo sh -c 'systemctl list-units  --type=service  --state=running | grep -e docker -e containerd -e crio']
I0623 13:55:15.960] +++ [0623 13:55:15] Building go targets for linux/amd64:
I0623 13:55:15.961]     ./vendor/k8s.io/code-generator/cmd/deepcopy-gen
I0623 13:55:18.100] Generating deepcopy code for 229 targets
I0623 13:55:25.124] +++ [0623 13:55:25] Building go targets for linux/amd64:
I0623 13:55:25.124]     ./vendor/k8s.io/code-generator/cmd/defaulter-gen
I0623 13:55:26.475] Generating defaulter code for 91 targets
W0623 13:55:31.225] E0623 13:55:31.225733    6143 ssh.go:116] failed to run SSH command: out: ssh: connect to host 34.145.49.26 port 22: Connection refused, err: exit status 255
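
Log lines like this come from the runner's ssh.go helper, which shells out to ssh with host-key checking disabled and a fixed identity file; exit status 255 is ssh's own exit code for connection-level failures, expected here while the freshly created VM is still booting. A minimal sketch of the pattern, using the exact options from the log (illustrative only; the real helper lives under test/e2e_node/remote):

// Sketch of the ssh.go pattern visible in the log: run a remote command via
// the system ssh binary and return its combined output. Illustrative only.
package main

import (
	"fmt"
	"os/exec"
)

func runSSHCommand(host string, cmd ...string) (string, error) {
	args := append([]string{
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-o", "CheckHostIP=no",
		"-o", "StrictHostKeyChecking=no",
		"-o", "ServerAliveInterval=30",
		"-o", "LogLevel=ERROR",
		"-i", "/workspace/.ssh/google_compute_engine",
		"core@" + host, "--",
	}, cmd...)
	out, err := exec.Command("ssh", args...).CombinedOutput()
	return string(out), err // ssh exits 255 when it cannot connect at all
}

func main() {
	out, err := runSSHCommand("34.145.49.26", "sudo", "cat", "/etc/os-release")
	fmt.Printf("out: %s, err: %v\n", out, err)
}
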
I0623 13:55:36.078] +++ [0623 13:55:36] Building go targets for linux/amd64:
I0623 13:55:36.078]     ./vendor/k8s.io/code-generator/cmd/conversion-gen
I0623 13:55:37.692] Generating conversion code for 125 targets
W0623 13:55:51.583] I0623 13:55:51.582686    6143 ssh.go:113] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@34.145.49.26 -- sudo sh -c 'systemctl list-units  --type=service  --state=running | grep -e docker -e containerd -e crio']
I0623 13:55:57.523] +++ [0623 13:55:57] Building go targets for linux/amd64:
I0623 13:55:57.524]     ./vendor/k8s.io/kube-openapi/cmd/openapi-gen
I0623 13:56:04.096] Generating openapi code for KUBE
I0623 13:56:09.245] Generating openapi code for AGGREGATOR
I0623 13:56:10.674] Generating openapi code for APIEXTENSIONS
I0623 13:56:12.328] Generating openapi code for CODEGEN
... skipping 5 lines ...
I0623 13:56:17.485]     cmd/kubelet
I0623 13:56:17.485]     test/e2e_node/e2e_node.test
I0623 13:56:17.485]     vendor/github.com/onsi/ginkgo/ginkgo
I0623 13:56:17.485]     cluster/gce/gci/mounter
I0623 14:02:17.129] make: Leaving directory '/go/src/k8s.io/kubernetes'
W0623 14:02:32.119] I0623 14:02:32.119349    6143 remote.go:71] Staging test binaries on "n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8"
W0623 14:02:32.120] I0623 14:02:32.119465    6143 ssh.go:113] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@34.145.49.26 -- mkdir /tmp/node-e2e-20210623T140232]
W0623 14:02:33.107] I0623 14:02:33.106730    6143 ssh.go:113] Running the command scp, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine /go/src/k8s.io/kubernetes/e2e_node_test.tar.gz core@34.145.49.26:/tmp/node-e2e-20210623T140232/]
W0623 14:02:35.157] I0623 14:02:35.157160    6143 remote.go:98] Extracting tar on "n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8"
W0623 14:02:35.158] I0623 14:02:35.157741    6143 ssh.go:113] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@34.145.49.26 -- sh -c 'cd /tmp/node-e2e-20210623T140232 && tar -xzvf ./e2e_node_test.tar.gz']
W0623 14:02:38.297] I0623 14:02:38.297118    6143 ssh.go:113] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@34.145.49.26 -- mkdir /tmp/node-e2e-20210623T140232/results]
W0623 14:02:39.003] I0623 14:02:39.003489    6143 remote.go:113] Running test on "n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8"
W0623 14:02:39.004] I0623 14:02:39.003533    6143 utils.go:54] Install CNI on "n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8"
W0623 14:02:39.004] I0623 14:02:39.003558    6143 ssh.go:113] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@34.145.49.26 -- sudo sh -c 'mkdir -p /tmp/node-e2e-20210623T140232/cni/bin ; curl -s -L https://storage.googleapis.com/k8s-artifacts-cni/release/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz | tar -xz -C /tmp/node-e2e-20210623T140232/cni/bin']
W0623 14:02:40.781] I0623 14:02:40.781342    6143 utils.go:67] Adding CNI configuration on "n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8"
W0623 14:02:40.782] I0623 14:02:40.781409    6143 ssh.go:113] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@34.145.49.26 -- sudo sh -c 'mkdir -p /tmp/node-e2e-20210623T140232/cni/net.d ; echo '"'"'{
W0623 14:02:40.782]   "name": "mynet",
W0623 14:02:40.783]   "type": "bridge",
W0623 14:02:40.783]   "bridge": "mynet0",
W0623 14:02:40.783]   "isDefaultGateway": true,
W0623 14:02:40.783]   "forceAddress": false,
W0623 14:02:40.783]   "ipMasq": true,
... skipping 2 lines ...
W0623 14:02:40.784]     "type": "host-local",
W0623 14:02:40.784]     "subnet": "10.10.0.0/16"
W0623 14:02:40.784]   }
W0623 14:02:40.784] }
W0623 14:02:40.784] '"'"' > /tmp/node-e2e-20210623T140232/cni/net.d/mynet.conf']
W0623 14:02:41.432] I0623 14:02:41.432443    6143 utils.go:81] Configure iptables firewall rules on "n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8"
W0623 14:02:41.433] I0623 14:02:41.432526    6143 ssh.go:113] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@34.145.49.26 -- sudo sh -c 'iptables -I INPUT 1 -w -p tcp -j ACCEPT&&iptables -I INPUT 1 -w -p udp -j ACCEPT&&iptables -I INPUT 1 -w -p icmp -j ACCEPT&&iptables -I FORWARD 1 -w -p tcp -j ACCEPT&&iptables -I FORWARD 1 -w -p udp -j ACCEPT&&iptables -I FORWARD 1 -w -p icmp -j ACCEPT']
W0623 14:02:42.061] I0623 14:02:42.060695    6143 utils.go:102] Killing any existing node processes on "n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8"
W0623 14:02:42.061] I0623 14:02:42.060775    6143 ssh.go:113] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@34.145.49.26 -- sudo sh -c 'pkill kubelet ; pkill kube-apiserver ; pkill etcd ; pkill e2e_node.test']
W0623 14:02:42.743] E0623 14:02:42.743128    6143 ssh.go:116] failed to run SSH command: out: , err: exit status 1
W0623 14:02:42.744] I0623 14:02:42.743228    6143 ssh.go:113] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@34.145.49.26 -- sudo cat /etc/os-release]
W0623 14:02:43.422] I0623 14:02:43.421342    6143 ssh.go:113] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@34.145.49.26 -- sudo sh -c '/usr/bin/chcon -u system_u -r object_r -t bin_t /tmp/node-e2e-20210623T140232/kubelet && /usr/bin/chcon -u system_u -r object_r -t bin_t /tmp/node-e2e-20210623T140232/e2e_node.test && /usr/bin/chcon -u system_u -r object_r -t bin_t /tmp/node-e2e-20210623T140232/ginkgo && /usr/bin/chcon -u system_u -r object_r -t bin_t /tmp/node-e2e-20210623T140232/mounter && /usr/bin/chcon -R -u system_u -r object_r -t bin_t /tmp/node-e2e-20210623T140232/cni/bin']
W0623 14:02:44.125] I0623 14:02:44.124848    6143 node_e2e.go:183] Starting tests on "n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8"
W0623 14:02:44.126] I0623 14:02:44.124951    6143 ssh.go:113] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@34.145.49.26 -- sudo sh -c 'cd /tmp/node-e2e-20210623T140232 && timeout -k 30s 25200.000000s ./ginkgo --nodes=1 --focus="\[Serial\]" --skip="\[Flaky\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeAlphaFeature:.+\]" ./e2e_node.test -- --system-spec-name= --system-spec-file= --extra-envs= --runtime-config= --logtostderr --v 4 --node-name=n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 --report-dir=/tmp/node-e2e-20210623T140232/results --report-prefix=fedora --image-description="fedora-coreos-34-20210529-3-0-gcp-x86-64" --feature-gates=DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service --non-masquerade-cidr=0.0.0.0/0" --extra-log="{\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}"']
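
The --focus and --skip values passed to ginkgo above are regular expressions matched against full spec names, which is why the bracketed tags are backslash-escaped. A quick sketch of that matching, using the spec name from the failure below (illustrative):

// Sketch: ginkgo --focus/--skip are regexes over spec names, so literal
// brackets must be escaped. The spec string is taken from the failing test.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	focus := regexp.MustCompile(`\[Serial\]`)
	skip := regexp.MustCompile(`\[Flaky\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeAlphaFeature:.+\]`)
	spec := "[sig-node] Density [Serial] [Slow] create a sequence of pods"
	// Prints "true false": the spec matches the focus and none of the skips, so it runs.
	fmt.Println(focus.MatchString(spec), skip.MatchString(spec))
}
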
W0623 16:08:26.620] E0623 16:08:26.611157    6143 ssh.go:116] failed to run SSH command: out: W0623 14:02:44.854843    2494 test_context.go:455] Unable to find in-cluster config, using default host : https://127.0.0.1:6443
W0623 16:08:26.623] I0623 14:02:44.855113    2494 test_context.go:472] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
W0623 16:08:26.624] Jun 23 14:02:44.855: INFO: The --provider flag is not set. Continuing as if --provider=skeleton had been used.
W0623 16:08:26.624] STEP: Enabling support for Kubelet Plugins Watcher
W0623 16:08:26.624] I0623 14:02:44.967152    2494 mount_linux.go:206] Detected OS with systemd
W0623 16:08:26.624] I0623 14:02:44.981963    2494 mount_linux.go:206] Detected OS with systemd
W0623 16:08:26.624] Running Suite: E2eNode Suite
... skipping 49 lines ...
W0623 16:08:26.632] I0623 14:02:45.192619    2494 image_list.go:166] Pre-pulling images with CRI [docker.io/nfvpe/sriov-device-plugin:v3.1 gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b google/cadvisor:latest k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff k8s.gcr.io/e2e-test-images/agnhost:2.32 k8s.gcr.io/e2e-test-images/busybox:1.29-1 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 k8s.gcr.io/e2e-test-images/ipc-utils:1.2 k8s.gcr.io/e2e-test-images/nginx:1.14-1 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1 k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1 k8s.gcr.io/e2e-test-images/nonewprivs:1.3 k8s.gcr.io/e2e-test-images/nonroot:1.1 k8s.gcr.io/e2e-test-images/perl:5.26 k8s.gcr.io/e2e-test-images/volume/gluster:1.2 k8s.gcr.io/e2e-test-images/volume/nfs:1.2 k8s.gcr.io/node-problem-detector:v0.6.2 k8s.gcr.io/pause:3.5 k8s.gcr.io/stress:v1]
W0623 16:08:26.632] I0623 14:05:50.204059    2494 e2e_node_suite_test.go:261] Locksmithd is masked successfully
W0623 16:08:26.633] I0623 14:05:50.204155    2494 server.go:102] Starting server "services" with command "/tmp/node-e2e-20210623T140232/e2e_node.test --run-services-mode --bearer-token=vaZqjuIJ2oF4zLZM --test.timeout=24h0m0s --ginkgo.seed=1624456964 --ginkgo.focus=\\[Serial\\] --ginkgo.skip=\\[Flaky\\]|\\[Benchmark\\]|\\[NodeSpecialFeature:.+\\]|\\[NodeAlphaFeature:.+\\] --ginkgo.slowSpecThreshold=5.00000 --system-spec-name= --system-spec-file= --extra-envs= --runtime-config= --logtostderr --v 4 --node-name=n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 --report-dir=/tmp/node-e2e-20210623T140232/results --report-prefix=fedora --image-description=fedora-coreos-34-20210529-3-0-gcp-x86-64 --feature-gates=DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags=--cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service --non-masquerade-cidr=0.0.0.0/0 --extra-log={\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}"
W0623 16:08:26.634] I0623 14:05:50.204195    2494 util.go:48] Running readiness check for service "services"
W0623 16:08:26.634] I0623 14:05:50.205216    2494 server.go:130] Output file for server "services": /tmp/node-e2e-20210623T140232/results/services.log
W0623 16:08:26.634] I0623 14:05:50.206188    2494 server.go:160] Waiting for server "services" start command to complete
W0623 16:08:26.634] W0623 14:05:55.407232    2494 util.go:106] Health check on "https://127.0.0.1:6443/healthz" failed, status=500
W0623 16:08:26.634] I0623 14:05:56.409449    2494 services.go:70] Node services started.
W0623 16:08:26.634] I0623 14:05:56.409468    2494 kubelet.go:100] Starting kubelet
W0623 16:08:26.635] I0623 14:05:56.409608    2494 feature_gate.go:243] feature gates: &{map[DynamicKubeletConfig:true LocalStorageCapacityIsolation:true]}
W0623 16:08:26.635] I0623 14:05:56.413341    2494 server.go:102] Starting server "kubelet" with command "/usr/bin/systemd-run -p Delegate=true --unit=kubelet-20210623T140232.service --slice=runtime.slice --remain-after-exit /tmp/node-e2e-20210623T140232/kubelet --kubeconfig /tmp/node-e2e-20210623T140232/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --feature-gates DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --dynamic-config-dir /tmp/node-e2e-20210623T140232/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20210623T140232/cni/bin --cni-conf-dir /tmp/node-e2e-20210623T140232/cni/net.d --cni-cache-dir /tmp/node-e2e-20210623T140232/cni/cache --hostname-override n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 --container-runtime remote --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20210623T140232/kubelet-config --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service --non-masquerade-cidr=0.0.0.0/0"
W0623 16:08:26.636] I0623 14:05:56.413552    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:26.636] I0623 14:05:56.413726    2494 server.go:130] Output file for server "kubelet": /tmp/node-e2e-20210623T140232/results/kubelet.log
W0623 16:08:26.636] I0623 14:05:56.414369    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:26.636] I0623 14:05:56.414388    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:26.636] W0623 14:05:57.414593    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:26.637] W0623 14:05:57.414655    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:26.637] W0623 14:05:58.415257    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:26.637] W0623 14:05:58.415321    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:26.637] W0623 14:05:59.415814    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:26.638] W0623 14:05:59.415871    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:26.638] W0623 14:06:00.416258    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:26.638] W0623 14:06:00.416309    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:26.638] W0623 14:06:01.417534    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:26.639] W0623 14:06:01.417585    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:26.639] W0623 14:06:02.418524    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:26.639] W0623 14:06:02.418578    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:26.639] W0623 14:06:03.419930    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:26.639] W0623 14:06:03.420691    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:26.640] I0623 14:06:04.421806    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:26.640] I0623 14:06:04.422567    2494 services.go:80] Kubelet started.
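
The burst of failed health checks above is the runner's readiness poll: the kubelet takes a few seconds to come up, so HEAD requests against its read-only /healthz endpoint (port 10255) are retried about once a second until one succeeds. A minimal sketch of that loop (illustrative only; the real helper is the util.go readiness check in test/e2e_node):

// Sketch of the readiness poll behind the util.go lines above: HEAD the
// kubelet's read-only /healthz until it answers 200 OK or the deadline
// passes. Illustrative only.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Head(url) // the log shows these as Head requests
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second) // matches the ~1s cadence in the log
	}
	return fmt.Errorf("timed out waiting for %q to become healthy", url)
}

func main() {
	if err := waitHealthy("http://127.0.0.1:10255/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
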
W0623 16:08:26.640] I0623 14:06:04.422591    2494 e2e_node_suite_test.go:207] Wait for the node to be ready
W0623 16:08:26.640] Jun 23 14:06:14.474: INFO: Parsing ds from https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/device-plugins/nvidia-gpu/daemonset.yaml
W0623 16:08:26.640] [sig-node] Density [Serial] [Slow] create a sequence of pods 
W0623 16:08:26.640]   latency/resource should be within limit when create 10 pods with 50 background pods
... skipping 163 lines ...
W0623 16:08:26.663]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
W0623 16:08:26.663] STEP: Collecting events from namespace "density-test-4812".
W0623 16:08:26.663] STEP: Found 4 events.
W0623 16:08:26.663] Jun 23 14:11:14.600: INFO: At 2021-06-23 14:06:15 +0000 UTC - event for cadvisor: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Pulled: Container image "google/cadvisor:latest" already present on machine
W0623 16:08:26.664] Jun 23 14:11:14.600: INFO: At 2021-06-23 14:06:15 +0000 UTC - event for cadvisor: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Created: Created container cadvisor
W0623 16:08:26.664] Jun 23 14:11:14.600: INFO: At 2021-06-23 14:06:15 +0000 UTC - event for cadvisor: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Started: Started container cadvisor
W0623 16:08:26.664] Jun 23 14:11:14.600: INFO: At 2021-06-23 14:06:17 +0000 UTC - event for cadvisor: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} BackOff: Back-off restarting failed container
W0623 16:08:26.664] Jun 23 14:11:14.607: INFO: POD       NODE                                                             PHASE    GRACE  CONDITIONS
W0623 16:08:26.665] Jun 23 14:11:14.608: INFO: cadvisor  n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-06-23 14:06:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-06-23 14:06:14 +0000 UTC ContainersNotReady containers with unready status: [cadvisor]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-06-23 14:06:14 +0000 UTC ContainersNotReady containers with unready status: [cadvisor]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-06-23 14:06:14 +0000 UTC  }]
W0623 16:08:26.665] Jun 23 14:11:14.608: INFO: 
W0623 16:08:26.665] Jun 23 14:11:14.610: INFO: 
W0623 16:08:26.665] Logging node info for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:26.672] Jun 23 14:11:14.612: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8    f8abf7bb-64d0-4a09-8d76-bda6f82ca588 100 0 2021-06-23 14:06:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-06-23 14:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2021-06-23 14:06:14 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7807873024 0} {<nil>} 7624876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18833769646 0} {<nil>} 18833769646 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7545729024 0} {<nil>} 7368876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-23 14:06:14 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-23 14:06:14 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-23 14:06:14 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-06-23 14:06:14 +0000 UTC,LastTransitionTime:2021-06-23 14:06:14 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.44,},NodeAddress{Type:Hostname,Address:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4e796a5e880bc4c48961313e1ae0f7f2,SystemUUID:4e796a5e-880b-c4c4-8961-313e1ae0f7f2,BootID:652e6d17-906d-40c8-b209-9babb77c0a87,KernelVersion:5.12.7-300.fc34.x86_64,OSImage:Fedora CoreOS 34.20210529.3.0,ContainerRuntimeVersion:cri-o://1.21.0,KubeletVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,KubeProxyVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:70283c77abb54f37e57cf4b838ca8978a66e6da3bd72c555696e0eaae1356b58 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1648681988,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster@sha256:83aae3701992f5ab15b9093bc73e77b43cf61e2522d7bf90d61dcb383b818b22 k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:352434302,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs@sha256:90af3b1795d2669a4a07d3a0fecbaa2ac920ef69b3c588e93423e74501793cdc k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:272582535,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd@sha256:cba7b71304b6369c0d5e1ea5e70631354b5824c7f75dbce9d63149af216efbeb k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:128894977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost@sha256:ef11a0f696f3489a1684af5525419ac332df8682a148c6843b4da63c1503ee5b k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:126732584,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:100377317,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:55e2dc12800dbf891abc700ef3004acf08ec15cc0fab95634327c09fd6d097eb k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:99655908,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:5b9eab56404c721c2f193d7967b57a92339506dfdba37e496e48304ff172e5b4 k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:99654372,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 docker.io/google/cadvisor:latest],SizeBytes:70377136,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot@sha256:93f8fe220940db5f92e1572e72b1457fc683ea3aebd24ac9474c6bca65660834 k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:43878048,},ContainerImage{Names:[gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b],SizeBytes:33602447,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx@sha256:ebf4de42b3d660133f6f7d0feddabe31a44d07ed55f59471fd2072b0d8e8afae k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:17245687,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils@sha256:d2a412b68cba0c952d98f837aeab5ab13e075dfbd78fcd183b76afa20de5bd3d k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:12250746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:244bdbdf4b8d368b5836e9d2c7808a280a73ad72ae321d644e9f220da503218f k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1374910,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause@sha256:369201a612f7b2b585a8e6ca99f77a36bcdbd032463d815388a96800b63ef2c8 k8s.gcr.io/pause:3.5],SizeBytes:689969,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
W0623 16:08:26.672] Jun 23 14:11:14.613: INFO: 
W0623 16:08:26.673] Logging kubelet events for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:26.673] Jun 23 14:11:14.614: INFO: 
W0623 16:08:26.673] Logging pods the kubelet thinks is on node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:26.673] Jun 23 14:11:14.628: INFO: cadvisor started at 2021-06-23 14:06:14 +0000 UTC (0+1 container statuses recorded)
W0623 16:08:26.673] Jun 23 14:11:14.628: INFO: 	Container cadvisor ready: false, restart count 5
... skipping 12 lines ...
W0623 16:08:26.676] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:23
W0623 16:08:26.676]   create a sequence of pods [BeforeEach]
W0623 16:08:26.676]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:223
W0623 16:08:26.676]     latency/resource should be within limit when create 10 pods with 50 background pods
W0623 16:08:26.676]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:247
W0623 16:08:26.676] 
W0623 16:08:26.676]     Unexpected error:
W0623 16:08:26.676]         <*errors.errorString | 0xc00027ac30>: {
W0623 16:08:26.677]             s: "timed out waiting for the condition",
W0623 16:08:26.677]         }
W0623 16:08:26.677]         timed out waiting for the condition
W0623 16:08:26.677]     occurred
W0623 16:08:26.677] 
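
The "timed out waiting for the condition" text above is the stock error from the k8s.io/apimachinery wait package, returned whenever a polled condition (here, the density test waiting for its background pods to reach Running) never becomes true within its timeout. A minimal reproduction of where that exact message comes from:

// Sketch: the error above is wait.ErrWaitTimeout from
// k8s.io/apimachinery/pkg/util/wait, produced when a polled condition never
// returns true. The always-false condition stands in for "all pods Running".
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	err := wait.Poll(2*time.Second, 10*time.Second, func() (bool, error) {
		return false, nil // pods never became ready
	})
	fmt.Println(err) // prints: timed out waiting for the condition
}
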
... skipping 47 lines ...
W0623 16:08:26.683] STEP: Collecting events from namespace "pidpressure-eviction-test-4274".
W0623 16:08:26.683] STEP: Found 0 events.
W0623 16:08:26.683] Jun 23 14:11:34.931: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
W0623 16:08:26.683] Jun 23 14:11:34.931: INFO: 
W0623 16:08:26.683] Jun 23 14:11:34.945: INFO: 
W0623 16:08:26.684] Logging node info for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:26.692] Jun 23 14:11:34.962: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8    f8abf7bb-64d0-4a09-8d76-bda6f82ca588 234 0 2021-06-23 14:06:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-06-23 14:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e_node.test Update v1 2021-06-23 14:11:14 +0000 UTC FieldsV1 {"f:spec":{"f:configSource":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{}}}}} } {kubelet Update v1 2021-06-23 14:11:23 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{".":{},"f:active":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}},"f:assigned":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-wmtvg,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7807873024 0} {<nil>} 7624876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7545729024 0} {<nil>} 7368876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-23 14:11:34 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-23 14:11:34 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-23 14:11:34 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-06-23 14:11:34 +0000 UTC,LastTransitionTime:2021-06-23 14:11:23 +0000 UTC,Reason:KubeletNotReady,Message:container runtime status check may not have completed yet,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.44,},NodeAddress{Type:Hostname,Address:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4e796a5e880bc4c48961313e1ae0f7f2,SystemUUID:4e796a5e-880b-c4c4-8961-313e1ae0f7f2,BootID:652e6d17-906d-40c8-b209-9babb77c0a87,KernelVersion:5.12.7-300.fc34.x86_64,OSImage:Fedora CoreOS 34.20210529.3.0,ContainerRuntimeVersion:cri-o://1.21.0,KubeletVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,KubeProxyVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:70283c77abb54f37e57cf4b838ca8978a66e6da3bd72c555696e0eaae1356b58 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1648681988,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster@sha256:83aae3701992f5ab15b9093bc73e77b43cf61e2522d7bf90d61dcb383b818b22 k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:352434302,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs@sha256:90af3b1795d2669a4a07d3a0fecbaa2ac920ef69b3c588e93423e74501793cdc k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:272582535,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd@sha256:cba7b71304b6369c0d5e1ea5e70631354b5824c7f75dbce9d63149af216efbeb k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:128894977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost@sha256:ef11a0f696f3489a1684af5525419ac332df8682a148c6843b4da63c1503ee5b k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:126732584,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:100377317,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:55e2dc12800dbf891abc700ef3004acf08ec15cc0fab95634327c09fd6d097eb k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:99655908,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:5b9eab56404c721c2f193d7967b57a92339506dfdba37e496e48304ff172e5b4 k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:99654372,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 docker.io/google/cadvisor:latest],SizeBytes:70377136,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot@sha256:93f8fe220940db5f92e1572e72b1457fc683ea3aebd24ac9474c6bca65660834 k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:43878048,},ContainerImage{Names:[gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b],SizeBytes:33602447,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx@sha256:ebf4de42b3d660133f6f7d0feddabe31a44d07ed55f59471fd2072b0d8e8afae k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:17245687,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils@sha256:d2a412b68cba0c952d98f837aeab5ab13e075dfbd78fcd183b76afa20de5bd3d k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:12250746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:244bdbdf4b8d368b5836e9d2c7808a280a73ad72ae321d644e9f220da503218f k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1374910,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause@sha256:369201a612f7b2b585a8e6ca99f77a36bcdbd032463d815388a96800b63ef2c8 k8s.gcr.io/pause:3.5],SizeBytes:689969,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-wmtvg,UID:7ac3d030-3581-446b-95c1-f977a647951e,ResourceVersion:222,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-wmtvg,UID:7ac3d030-3581-446b-95c1-f977a647951e,ResourceVersion:222,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
W0623 16:08:26.692] Jun 23 14:11:34.962: INFO: 
W0623 16:08:26.692] Logging kubelet events for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:26.692] Jun 23 14:11:34.966: INFO: 
W0623 16:08:26.692] Logging pods the kubelet thinks is on node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:26.693] W0623 14:11:34.978216    2494 metrics_grabber.go:89] Can't find any pods in namespace kube-system to grab metrics from
W0623 16:08:26.693] W0623 14:11:34.978379    2494 metrics_grabber.go:107] Can't find kube-scheduler pod. Grabbing metrics from kube-scheduler is disabled.
... skipping 18 lines ...
W0623 16:08:26.696]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:407
W0623 16:08:26.696]     
W0623 16:08:26.696]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:460
W0623 16:08:26.696]       should eventually evict all of the correct pods [BeforeEach]
W0623 16:08:26.696]       _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:475
W0623 16:08:26.696] 
W0623 16:08:26.696]       Unexpected error:
W0623 16:08:26.696]           <*exec.ExitError | 0xc00133b2e0>: {
W0623 16:08:26.697]               ProcessState: {
W0623 16:08:26.697]                   pid: 4443,
W0623 16:08:26.697]                   status: 256,
W0623 16:08:26.697]                   rusage: {
W0623 16:08:26.697]                       Utime: {Sec: 0, Usec: 27605},
... skipping 63 lines ...
W0623 16:08:26.704] Jun 23 14:11:45.016: INFO: Skipping waiting for service account
W0623 16:08:26.704] [BeforeEach] Downward API tests for local ephemeral storage
W0623 16:08:26.704]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi.go:37
W0623 16:08:26.704] [It] should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
W0623 16:08:26.704]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi.go:41
W0623 16:08:26.704] STEP: Creating a pod to test downward api env vars
W0623 16:08:26.704] Jun 23 14:11:45.031: INFO: Waiting up to 5m0s for pod "downward-api-7e99f7c4-3612-442f-ac46-ddda484669b2" in namespace "downward-api-2737" to be "Succeeded or Failed"
W0623 16:08:26.705] Jun 23 14:11:45.037: INFO: Pod "downward-api-7e99f7c4-3612-442f-ac46-ddda484669b2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03645ms
W0623 16:08:26.705] Jun 23 14:11:47.040: INFO: Pod "downward-api-7e99f7c4-3612-442f-ac46-ddda484669b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008837562s
W0623 16:08:26.705] Jun 23 14:11:49.044: INFO: Pod "downward-api-7e99f7c4-3612-442f-ac46-ddda484669b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012211054s
W0623 16:08:26.705] STEP: Saw pod success
W0623 16:08:26.705] Jun 23 14:11:49.044: INFO: Pod "downward-api-7e99f7c4-3612-442f-ac46-ddda484669b2" satisfied condition "Succeeded or Failed"
W0623 16:08:26.705] Jun 23 14:11:49.046: INFO: Trying to get logs from node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 pod downward-api-7e99f7c4-3612-442f-ac46-ddda484669b2 container dapi-container: <nil>
W0623 16:08:26.705] STEP: delete the pod
W0623 16:08:26.706] Jun 23 14:11:49.061: INFO: Waiting for pod downward-api-7e99f7c4-3612-442f-ac46-ddda484669b2 to disappear
W0623 16:08:26.706] Jun 23 14:11:49.063: INFO: Pod downward-api-7e99f7c4-3612-442f-ac46-ddda484669b2 no longer exists
W0623 16:08:26.706] [AfterEach] [sig-storage] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage]
W0623 16:08:26.706]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 221 lines ...
W0623 16:08:26.733]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
W0623 16:08:26.733] STEP: Collecting events from namespace "density-test-3986".
W0623 16:08:26.733] STEP: Found 4 events.
W0623 16:08:26.733] Jun 23 14:16:49.134: INFO: At 2021-06-23 14:11:49 +0000 UTC - event for cadvisor: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Pulled: Container image "google/cadvisor:latest" already present on machine
W0623 16:08:26.733] Jun 23 14:16:49.134: INFO: At 2021-06-23 14:11:49 +0000 UTC - event for cadvisor: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Created: Created container cadvisor
W0623 16:08:26.734] Jun 23 14:16:49.134: INFO: At 2021-06-23 14:11:49 +0000 UTC - event for cadvisor: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Started: Started container cadvisor
W0623 16:08:26.734] Jun 23 14:16:49.134: INFO: At 2021-06-23 14:11:51 +0000 UTC - event for cadvisor: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} BackOff: Back-off restarting failed container
W0623 16:08:26.734] Jun 23 14:16:49.136: INFO: POD       NODE                                                             PHASE    GRACE  CONDITIONS
W0623 16:08:26.735] Jun 23 14:16:49.136: INFO: cadvisor  n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-06-23 14:11:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-06-23 14:12:38 +0000 UTC ContainersNotReady containers with unready status: [cadvisor]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-06-23 14:12:38 +0000 UTC ContainersNotReady containers with unready status: [cadvisor]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-06-23 14:11:49 +0000 UTC  }]
W0623 16:08:26.735] Jun 23 14:16:49.136: INFO: 
W0623 16:08:26.735] Jun 23 14:16:49.138: INFO: 
W0623 16:08:26.735] Logging node info for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:26.742] Jun 23 14:16:49.139: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8    f8abf7bb-64d0-4a09-8d76-bda6f82ca588 370 0 2021-06-23 14:06:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-06-23 14:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e_node.test Update v1 2021-06-23 14:11:14 +0000 UTC FieldsV1 {"f:spec":{"f:configSource":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{}}}}} } {kubelet Update v1 2021-06-23 14:11:23 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{".":{},"f:active":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}},"f:assigned":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-wmtvg,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7807873024 0} {<nil>} 7624876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18833769646 0} {<nil>} 18833769646 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7545729024 0} {<nil>} 7368876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-23 14:16:45 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-23 14:16:45 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-23 14:16:45 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-06-23 14:16:45 +0000 UTC,LastTransitionTime:2021-06-23 14:11:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.44,},NodeAddress{Type:Hostname,Address:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4e796a5e880bc4c48961313e1ae0f7f2,SystemUUID:4e796a5e-880b-c4c4-8961-313e1ae0f7f2,BootID:652e6d17-906d-40c8-b209-9babb77c0a87,KernelVersion:5.12.7-300.fc34.x86_64,OSImage:Fedora CoreOS 34.20210529.3.0,ContainerRuntimeVersion:cri-o://1.21.0,KubeletVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,KubeProxyVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:70283c77abb54f37e57cf4b838ca8978a66e6da3bd72c555696e0eaae1356b58 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1648681988,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster@sha256:83aae3701992f5ab15b9093bc73e77b43cf61e2522d7bf90d61dcb383b818b22 k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:352434302,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs@sha256:90af3b1795d2669a4a07d3a0fecbaa2ac920ef69b3c588e93423e74501793cdc k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:272582535,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd@sha256:cba7b71304b6369c0d5e1ea5e70631354b5824c7f75dbce9d63149af216efbeb k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:128894977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost@sha256:ef11a0f696f3489a1684af5525419ac332df8682a148c6843b4da63c1503ee5b k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:126732584,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:100377317,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:55e2dc12800dbf891abc700ef3004acf08ec15cc0fab95634327c09fd6d097eb k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:99655908,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:5b9eab56404c721c2f193d7967b57a92339506dfdba37e496e48304ff172e5b4 k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:99654372,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 docker.io/google/cadvisor:latest],SizeBytes:70377136,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot@sha256:93f8fe220940db5f92e1572e72b1457fc683ea3aebd24ac9474c6bca65660834 k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:43878048,},ContainerImage{Names:[gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b],SizeBytes:33602447,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx@sha256:ebf4de42b3d660133f6f7d0feddabe31a44d07ed55f59471fd2072b0d8e8afae k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:17245687,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils@sha256:d2a412b68cba0c952d98f837aeab5ab13e075dfbd78fcd183b76afa20de5bd3d k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:12250746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:244bdbdf4b8d368b5836e9d2c7808a280a73ad72ae321d644e9f220da503218f k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1374910,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause@sha256:369201a612f7b2b585a8e6ca99f77a36bcdbd032463d815388a96800b63ef2c8 k8s.gcr.io/pause:3.5],SizeBytes:689969,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-wmtvg,UID:7ac3d030-3581-446b-95c1-f977a647951e,ResourceVersion:222,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-wmtvg,UID:7ac3d030-3581-446b-95c1-f977a647951e,ResourceVersion:222,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
W0623 16:08:26.743] Jun 23 14:16:49.139: INFO: 
W0623 16:08:26.743] Logging kubelet events for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:26.743] Jun 23 14:16:49.141: INFO: 
W0623 16:08:26.743] Logging pods the kubelet thinks are on node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:26.743] Jun 23 14:16:49.149: INFO: cadvisor started at 2021-06-23 14:11:49 +0000 UTC (0+1 container statuses recorded)
W0623 16:08:26.743] Jun 23 14:16:49.149: INFO: 	Container cadvisor ready: false, restart count 5
... skipping 12 lines ...
W0623 16:08:26.745] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:23
W0623 16:08:26.745]   create a batch of pods [BeforeEach]
W0623 16:08:26.745]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:74
W0623 16:08:26.745]     latency/resource should be within limit when creating 10 pods with 0s interval
W0623 16:08:26.745]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:103
W0623 16:08:26.746] 
W0623 16:08:26.746]     Unexpected error:
W0623 16:08:26.746]         <*errors.errorString | 0xc00027ac30>: {
W0623 16:08:26.746]             s: "timed out waiting for the condition",
W0623 16:08:26.746]         }
W0623 16:08:26.746]         timed out waiting for the condition
W0623 16:08:26.746]     occurred
W0623 16:08:26.746] 
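The "timed out waiting for the condition" text above is the generic timeout error returned by the k8s.io/apimachinery wait helpers when a polled condition never becomes true; it is not specific to the density test. A minimal Go sketch of how that error arises (the condition body is hypothetical, standing in for "all 10 pods reached Running"):

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Poll every 2s for up to 10s; the condition never succeeds, so
	// Poll returns wait.ErrWaitTimeout, whose Error() string is
	// "timed out waiting for the condition".
	err := wait.Poll(2*time.Second, 10*time.Second, func() (bool, error) {
		return false, nil // hypothetical: pods not yet Running
	})
	fmt.Println(err)
}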
... skipping 11 lines ...
W0623 16:08:26.748] Jun 23 14:16:49.184: INFO: Skipping waiting for service account
W0623 16:08:26.748] [BeforeEach] Downward API tests for local ephemeral storage
W0623 16:08:26.748]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi.go:37
W0623 16:08:26.748] [It] should provide default limits.ephemeral-storage from node allocatable
W0623 16:08:26.748]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi.go:69
W0623 16:08:26.748] STEP: Creating a pod to test downward api env vars
W0623 16:08:26.748] Jun 23 14:16:49.192: INFO: Waiting up to 5m0s for pod "downward-api-7214bc87-421a-4a2b-b55c-6443cc109f79" in namespace "downward-api-7421" to be "Succeeded or Failed"
W0623 16:08:26.749] Jun 23 14:16:49.195: INFO: Pod "downward-api-7214bc87-421a-4a2b-b55c-6443cc109f79": Phase="Pending", Reason="", readiness=false. Elapsed: 3.0346ms
W0623 16:08:26.749] Jun 23 14:16:51.203: INFO: Pod "downward-api-7214bc87-421a-4a2b-b55c-6443cc109f79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010547117s
W0623 16:08:26.749] Jun 23 14:16:53.206: INFO: Pod "downward-api-7214bc87-421a-4a2b-b55c-6443cc109f79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01371454s
W0623 16:08:26.749] STEP: Saw pod success
W0623 16:08:26.749] Jun 23 14:16:53.206: INFO: Pod "downward-api-7214bc87-421a-4a2b-b55c-6443cc109f79" satisfied condition "Succeeded or Failed"
W0623 16:08:26.749] Jun 23 14:16:53.208: INFO: Trying to get logs from node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 pod downward-api-7214bc87-421a-4a2b-b55c-6443cc109f79 container dapi-container: <nil>
W0623 16:08:26.749] STEP: delete the pod
W0623 16:08:26.750] Jun 23 14:16:53.223: INFO: Waiting for pod downward-api-7214bc87-421a-4a2b-b55c-6443cc109f79 to disappear
W0623 16:08:26.750] Jun 23 14:16:53.224: INFO: Pod downward-api-7214bc87-421a-4a2b-b55c-6443cc109f79 no longer exists
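The passing test above exercises the downward API's resourceFieldRef: when the container sets no explicit ephemeral-storage limit, the injected value defaults to the node's allocatable. A sketch of the env-var wiring involved, using the core/v1 types (the env var name is hypothetical; "dapi-container" is the container name from the log):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Env var sourced from the container's effective ephemeral-storage
	// limit; absent an explicit limit, it falls back to node allocatable.
	env := v1.EnvVar{
		Name: "EPHEMERAL_STORAGE_LIMIT", // hypothetical name
		ValueFrom: &v1.EnvVarSource{
			ResourceFieldRef: &v1.ResourceFieldSelector{
				ContainerName: "dapi-container", // container name seen in the log
				Resource:      "limits.ephemeral-storage",
			},
		},
	}
	fmt.Printf("%+v\n", env)
}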
W0623 16:08:26.750] [AfterEach] [sig-storage] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage]
W0623 16:08:26.750]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 108 lines ...
W0623 16:08:26.765] STEP: Creating a kubernetes client
W0623 16:08:26.766] STEP: Building a namespace api object, basename device-plugin-gpus-errors
W0623 16:08:26.766] Jun 23 14:41:55.021: INFO: Skipping waiting for service account
W0623 16:08:26.766] [BeforeEach] DevicePlugin
W0623 16:08:26.766]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/gpu_device_plugin_test.go:74
W0623 16:08:26.766] STEP: Ensuring that Nvidia GPUs exist on the node
W0623 16:08:26.766] Jun 23 14:41:55.032: INFO: check for nvidia GPUs failed. Got Error: exit status 1
W0623 16:08:26.766] [AfterEach] DevicePlugin
W0623 16:08:26.766]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/gpu_device_plugin_test.go:94
W0623 16:08:26.766] [AfterEach] [sig-node] NVIDIA GPU Device Plugin [Feature:GPUDevicePlugin][NodeFeature:GPUDevicePlugin][Serial] [Disruptive]
W0623 16:08:26.767]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
W0623 16:08:26.767] Jun 23 14:41:55.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0623 16:08:26.767] STEP: Destroying namespace "device-plugin-gpus-errors-8372" for this suite.
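The GPU device-plugin suite bails out here because its precondition probe for NVIDIA hardware exits non-zero on this GPU-less node. A hedged sketch of that kind of host probe; nvidia-smi is illustrative only, since the log does not say which command the test actually runs:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	if err := exec.Command("nvidia-smi").Run(); err != nil {
		// On a node with no GPU (or no driver), this produces the
		// "exit status 1" style error the test logs above.
		fmt.Println("check for nvidia GPUs failed. Got Error:", err)
	}
}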
... skipping 224 lines ...
W0623 16:08:26.828] Jun 23 14:42:09.272: INFO: At 2021-06-23 14:42:01 +0000 UTC - event for guaranteed: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Killing: Stopping container container
W0623 16:08:26.828] Jun 23 14:42:09.272: INFO: At 2021-06-23 14:42:07 +0000 UTC - event for best-effort: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Killing: Stopping container container
W0623 16:08:26.828] Jun 23 14:42:09.276: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
W0623 16:08:26.828] Jun 23 14:42:09.276: INFO: 
W0623 16:08:26.828] Jun 23 14:42:09.290: INFO: 
W0623 16:08:26.828] Logging node info for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:26.837] Jun 23 14:42:09.300: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8    f8abf7bb-64d0-4a09-8d76-bda6f82ca588 850 0 2021-06-23 14:06:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-06-23 14:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e_node.test Update v1 2021-06-23 14:41:31 +0000 UTC FieldsV1 {"f:spec":{"f:configSource":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{}}}}} } {kubelet Update v1 2021-06-23 14:41:42 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{".":{},"f:active":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}},"f:assigned":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-9nthb,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7807873024 0} {<nil>} 7624876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7545729024 0} {<nil>} 7368876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-23 14:42:06 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-23 14:42:06 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-23 14:42:06 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-06-23 14:42:06 +0000 UTC,LastTransitionTime:2021-06-23 14:42:06 +0000 UTC,Reason:KubeletNotReady,Message:container runtime status 
check may not have completed yet,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.44,},NodeAddress{Type:Hostname,Address:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4e796a5e880bc4c48961313e1ae0f7f2,SystemUUID:4e796a5e-880b-c4c4-8961-313e1ae0f7f2,BootID:652e6d17-906d-40c8-b209-9babb77c0a87,KernelVersion:5.12.7-300.fc34.x86_64,OSImage:Fedora CoreOS 34.20210529.3.0,ContainerRuntimeVersion:cri-o://1.21.0,KubeletVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,KubeProxyVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:70283c77abb54f37e57cf4b838ca8978a66e6da3bd72c555696e0eaae1356b58 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1648681988,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster@sha256:83aae3701992f5ab15b9093bc73e77b43cf61e2522d7bf90d61dcb383b818b22 k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:352434302,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs@sha256:90af3b1795d2669a4a07d3a0fecbaa2ac920ef69b3c588e93423e74501793cdc k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:272582535,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd@sha256:cba7b71304b6369c0d5e1ea5e70631354b5824c7f75dbce9d63149af216efbeb k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:128894977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost@sha256:ef11a0f696f3489a1684af5525419ac332df8682a148c6843b4da63c1503ee5b k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:126732584,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:100377317,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:55e2dc12800dbf891abc700ef3004acf08ec15cc0fab95634327c09fd6d097eb k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:99655908,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:5b9eab56404c721c2f193d7967b57a92339506dfdba37e496e48304ff172e5b4 k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:99654372,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 
docker.io/google/cadvisor:latest],SizeBytes:70377136,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot@sha256:93f8fe220940db5f92e1572e72b1457fc683ea3aebd24ac9474c6bca65660834 k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:43878048,},ContainerImage{Names:[gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b],SizeBytes:33602447,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx@sha256:ebf4de42b3d660133f6f7d0feddabe31a44d07ed55f59471fd2072b0d8e8afae k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:17245687,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils@sha256:d2a412b68cba0c952d98f837aeab5ab13e075dfbd78fcd183b76afa20de5bd3d k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:12250746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:244bdbdf4b8d368b5836e9d2c7808a280a73ad72ae321d644e9f220da503218f k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1374910,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause@sha256:369201a612f7b2b585a8e6ca99f77a36bcdbd032463d815388a96800b63ef2c8 k8s.gcr.io/pause:3.5],SizeBytes:689969,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-9nthb,UID:2458efd4-98db-493b-95d8-a28bfb7a21a5,ResourceVersion:823,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-9nthb,UID:2458efd4-98db-493b-95d8-a28bfb7a21a5,ResourceVersion:823,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
W0623 16:08:26.837] Jun 23 14:42:09.301: INFO: 
W0623 16:08:26.837] Logging kubelet events for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:26.837] Jun 23 14:42:09.309: INFO: 
W0623 16:08:26.837] Logging pods the kubelet thinks are on node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:26.838] Jun 23 14:42:09.352: INFO: static-critical-pod started at 2021-06-23 14:42:01 +0000 UTC (0+1 container statuses recorded)
W0623 16:08:26.838] Jun 23 14:42:09.352: INFO: 	Container container ready: true, restart count 0
... skipping 15 lines ...
W0623 16:08:26.841] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:23
W0623 16:08:26.841]   when we need to admit a critical pod
W0623 16:08:26.842]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:46
W0623 16:08:26.842]     should be able to create and delete a critical pod [It]
W0623 16:08:26.842]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:53
W0623 16:08:26.842] 
W0623 16:08:26.842]     Unexpected error:
W0623 16:08:26.842]         <*errors.StatusError | 0xc0004d66e0>: {
W0623 16:08:26.843]             ErrStatus: {
W0623 16:08:26.843]                 TypeMeta: {Kind: "", APIVersion: ""},
W0623 16:08:26.843]                 ListMeta: {
W0623 16:08:26.843]                     SelfLink: "",
W0623 16:08:26.843]                     ResourceVersion: "",
... skipping 124 lines ...
W0623 16:08:26.862] I0623 14:56:01.388249    2494 util.go:247] new configuration has taken effect
W0623 16:08:26.862] STEP: Found 0 events.
W0623 16:08:26.862] Jun 23 14:56:01.391: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
W0623 16:08:26.862] Jun 23 14:56:01.391: INFO: 
W0623 16:08:26.862] Jun 23 14:56:01.393: INFO: 
W0623 16:08:26.862] Logging node info for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:26.870] Jun 23 14:56:01.395: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8    f8abf7bb-64d0-4a09-8d76-bda6f82ca588 1160 0 2021-06-23 14:06:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-06-23 14:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e_node.test Update v1 2021-06-23 14:55:14 +0000 UTC FieldsV1 {"f:spec":{"f:configSource":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{}}}}} } {kubelet Update v1 2021-06-23 14:55:25 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{".":{},"f:active":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}},"f:assigned":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-hsmwz,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7807873024 0} {<nil>} 7624876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7545729024 0} {<nil>} 7368876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-23 14:56:01 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-23 14:56:01 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-23 14:56:01 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-06-23 14:56:01 +0000 UTC,LastTransitionTime:2021-06-23 14:56:01 +0000 UTC,Reason:KubeletNotReady,Message:container runtime status 
check may not have completed yet,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.44,},NodeAddress{Type:Hostname,Address:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4e796a5e880bc4c48961313e1ae0f7f2,SystemUUID:4e796a5e-880b-c4c4-8961-313e1ae0f7f2,BootID:652e6d17-906d-40c8-b209-9babb77c0a87,KernelVersion:5.12.7-300.fc34.x86_64,OSImage:Fedora CoreOS 34.20210529.3.0,ContainerRuntimeVersion:cri-o://1.21.0,KubeletVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,KubeProxyVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:70283c77abb54f37e57cf4b838ca8978a66e6da3bd72c555696e0eaae1356b58 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1648681988,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster@sha256:83aae3701992f5ab15b9093bc73e77b43cf61e2522d7bf90d61dcb383b818b22 k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:352434302,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs@sha256:90af3b1795d2669a4a07d3a0fecbaa2ac920ef69b3c588e93423e74501793cdc k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:272582535,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd@sha256:cba7b71304b6369c0d5e1ea5e70631354b5824c7f75dbce9d63149af216efbeb k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:128894977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost@sha256:ef11a0f696f3489a1684af5525419ac332df8682a148c6843b4da63c1503ee5b k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:126732584,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:100377317,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:55e2dc12800dbf891abc700ef3004acf08ec15cc0fab95634327c09fd6d097eb k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:99655908,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:5b9eab56404c721c2f193d7967b57a92339506dfdba37e496e48304ff172e5b4 k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:99654372,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 
docker.io/google/cadvisor:latest],SizeBytes:70377136,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot@sha256:93f8fe220940db5f92e1572e72b1457fc683ea3aebd24ac9474c6bca65660834 k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:43878048,},ContainerImage{Names:[gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b],SizeBytes:33602447,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx@sha256:ebf4de42b3d660133f6f7d0feddabe31a44d07ed55f59471fd2072b0d8e8afae k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:17245687,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils@sha256:d2a412b68cba0c952d98f837aeab5ab13e075dfbd78fcd183b76afa20de5bd3d k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:12250746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:244bdbdf4b8d368b5836e9d2c7808a280a73ad72ae321d644e9f220da503218f k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1374910,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause@sha256:369201a612f7b2b585a8e6ca99f77a36bcdbd032463d815388a96800b63ef2c8 k8s.gcr.io/pause:3.5],SizeBytes:689969,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-hsmwz,UID:fa535109-4152-4edf-a689-563de7b21bde,ResourceVersion:1146,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-hsmwz,UID:fa535109-4152-4edf-a689-563de7b21bde,ResourceVersion:1146,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
W0623 16:08:26.870] Jun 23 14:56:01.395: INFO: 
W0623 16:08:26.870] Logging kubelet events for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:26.870] Jun 23 14:56:01.397: INFO: 
W0623 16:08:26.870] Logging pods the kubelet thinks are on node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:26.871] W0623 14:56:01.403607    2494 metrics_grabber.go:89] Can't find any pods in namespace kube-system to grab metrics from
W0623 16:08:26.871] W0623 14:56:01.403624    2494 metrics_grabber.go:107] Can't find kube-scheduler pod. Grabbing metrics from kube-scheduler is disabled.
... skipping 17 lines ...
W0623 16:08:26.874]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:140
W0623 16:08:26.874]     
W0623 16:08:26.874]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:460
W0623 16:08:26.874]       should eventually evict all of the correct pods [BeforeEach]
W0623 16:08:26.874]       _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:475
W0623 16:08:26.874] 
W0623 16:08:26.874]       Unexpected error:
W0623 16:08:26.874]           <*exec.ExitError | 0xc00027dc60>: {
W0623 16:08:26.874]               ProcessState: {
W0623 16:08:26.874]                   pid: 11576,
W0623 16:08:26.875]                   status: 256,
W0623 16:08:26.875]                   rusage: {
W0623 16:08:26.875]                       Utime: {Sec: 0, Usec: 32377},
... skipping 195 lines ...
W0623 16:08:26.898] I0623 15:09:19.773828    2494 util.go:247] new configuration has taken effect
W0623 16:08:26.899] STEP: Found 0 events.
W0623 16:08:26.899] Jun 23 15:09:19.777: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
W0623 16:08:26.899] Jun 23 15:09:19.777: INFO: 
W0623 16:08:26.899] Jun 23 15:09:19.779: INFO: 
W0623 16:08:26.899] Logging node info for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:26.906] Jun 23 15:09:19.781: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8    f8abf7bb-64d0-4a09-8d76-bda6f82ca588 1444 0 2021-06-23 14:06:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-06-23 14:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e_node.test Update v1 2021-06-23 15:08:40 +0000 UTC FieldsV1 {"f:spec":{"f:configSource":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{}}}}} } {kubelet Update v1 2021-06-23 15:08:53 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{".":{},"f:active":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}},"f:assigned":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-ghlk6,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7807873024 0} {<nil>} 7624876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7545729024 0} {<nil>} 7368876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-23 15:09:17 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-23 15:09:17 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-23 15:09:17 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-06-23 15:09:17 +0000 UTC,LastTransitionTime:2021-06-23 15:09:17 +0000 UTC,Reason:KubeletNotReady,Message:container runtime status 
check may not have completed yet,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.44,},NodeAddress{Type:Hostname,Address:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4e796a5e880bc4c48961313e1ae0f7f2,SystemUUID:4e796a5e-880b-c4c4-8961-313e1ae0f7f2,BootID:652e6d17-906d-40c8-b209-9babb77c0a87,KernelVersion:5.12.7-300.fc34.x86_64,OSImage:Fedora CoreOS 34.20210529.3.0,ContainerRuntimeVersion:cri-o://1.21.0,KubeletVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,KubeProxyVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:70283c77abb54f37e57cf4b838ca8978a66e6da3bd72c555696e0eaae1356b58 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1648681988,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster@sha256:83aae3701992f5ab15b9093bc73e77b43cf61e2522d7bf90d61dcb383b818b22 k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:352434302,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs@sha256:90af3b1795d2669a4a07d3a0fecbaa2ac920ef69b3c588e93423e74501793cdc k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:272582535,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd@sha256:cba7b71304b6369c0d5e1ea5e70631354b5824c7f75dbce9d63149af216efbeb k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:128894977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost@sha256:ef11a0f696f3489a1684af5525419ac332df8682a148c6843b4da63c1503ee5b k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:126732584,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:100377317,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:55e2dc12800dbf891abc700ef3004acf08ec15cc0fab95634327c09fd6d097eb k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:99655908,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:5b9eab56404c721c2f193d7967b57a92339506dfdba37e496e48304ff172e5b4 k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:99654372,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 
docker.io/google/cadvisor:latest],SizeBytes:70377136,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot@sha256:93f8fe220940db5f92e1572e72b1457fc683ea3aebd24ac9474c6bca65660834 k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:43878048,},ContainerImage{Names:[gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b],SizeBytes:33602447,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx@sha256:ebf4de42b3d660133f6f7d0feddabe31a44d07ed55f59471fd2072b0d8e8afae k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:17245687,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils@sha256:d2a412b68cba0c952d98f837aeab5ab13e075dfbd78fcd183b76afa20de5bd3d k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:12250746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:244bdbdf4b8d368b5836e9d2c7808a280a73ad72ae321d644e9f220da503218f k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1374910,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause@sha256:369201a612f7b2b585a8e6ca99f77a36bcdbd032463d815388a96800b63ef2c8 k8s.gcr.io/pause:3.5],SizeBytes:689969,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-ghlk6,UID:e3fa0682-b24e-4bf9-bc07-8405021bde4f,ResourceVersion:1429,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-ghlk6,UID:e3fa0682-b24e-4bf9-bc07-8405021bde4f,ResourceVersion:1429,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
W0623 16:08:26.906] Jun 23 15:09:19.781: INFO: 
W0623 16:08:26.907] Logging kubelet events for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:26.907] Jun 23 15:09:19.783: INFO: 
W0623 16:08:26.907] Logging pods the kubelet thinks are on node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:26.907] W0623 15:09:19.789368    2494 metrics_grabber.go:89] Can't find any pods in namespace kube-system to grab metrics from
W0623 16:08:26.907] W0623 15:09:19.789399    2494 metrics_grabber.go:107] Can't find kube-scheduler pod. Grabbing metrics from kube-scheduler is disabled.
... skipping 16 lines ...
W0623 16:08:26.910]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:236
W0623 16:08:26.910]     
W0623 16:08:26.910]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:460
W0623 16:08:26.911]       should eventually evict all of the correct pods [BeforeEach]
W0623 16:08:26.911]       _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:475
W0623 16:08:26.911] 
W0623 16:08:26.911]       Unexpected error:
W0623 16:08:26.911]           <*exec.ExitError | 0xc000a3ce20>: {
W0623 16:08:26.911]               ProcessState: {
W0623 16:08:26.911]                   pid: 13746,
W0623 16:08:26.911]                   status: 256,
W0623 16:08:26.911]                   rusage: {
W0623 16:08:26.912]                       Utime: {Sec: 0, Usec: 29520},
... skipping 67 lines ...
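Both eviction failures above surface as a *exec.ExitError whose raw Unix wait status is 256, i.e. exit code 1 (status >> 8). A small Go sketch reproducing and decoding that error shape:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// "sh -c 'exit 1'" yields the same error shape as the eviction
	// tests' failed helper command: raw wait status 256, exit code 1.
	err := exec.Command("sh", "-c", "exit 1").Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println(exitErr.ExitCode()) // 1
	}
}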
W0623 16:08:26.919] [AfterEach] [sig-node] Container Manager Misc [Serial]
W0623 16:08:26.919]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
W0623 16:08:26.919] Jun 23 15:09:27.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0623 16:08:26.919] STEP: Destroying namespace "kubelet-container-manager-5977" for this suite.
W0623 16:08:26.920] •SSSSSSSSSSSSS
W0623 16:08:26.920] ------------------------------
W0623 16:08:26.920] [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive]  delete and recreate ConfigMap: error while ConfigMap is absent: 
W0623 16:08:26.920]   status and events should match expectations
W0623 16:08:26.920]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:784
W0623 16:08:26.920] [BeforeEach] [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive]
W0623 16:08:26.920]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
W0623 16:08:26.921] STEP: Creating a kubernetes client
W0623 16:08:26.921] STEP: Building a namespace api object, basename dynamic-kubelet-configuration-test
... skipping 36 lines ...
W0623 16:08:26.926] 
W0623 16:08:26.926] • [SLOW TEST:71.601 seconds]
W0623 16:08:26.926] [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive]
W0623 16:08:26.926] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:23
W0623 16:08:26.926]   
W0623 16:08:26.926]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:81
W0623 16:08:26.926]     delete and recreate ConfigMap: error while ConfigMap is absent:
W0623 16:08:26.927]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:783
W0623 16:08:26.927]       status and events should match expectations
W0623 16:08:26.927]       _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:784
W0623 16:08:26.927] ------------------------------
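The DynamicKubeletConfig tests above work by writing and clearing the node's spec.configSource, pointing it at generated kube-system ConfigMaps (the testcfg-* names in the node dumps). A sketch of that object using the core/v1 types, with field values copied from the log:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Spec-side source leaves UID and ResourceVersion empty; the kubelet
	// fills them in the status Assigned/Active entries, as seen in the
	// node dumps above.
	src := &v1.NodeConfigSource{
		ConfigMap: &v1.ConfigMapNodeConfigSource{
			Namespace:        "kube-system",
			Name:             "testcfg-hsmwz", // example name from the log
			KubeletConfigKey: "kubelet",
		},
	}
	fmt.Printf("%+v\n", src)
}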
W0623 16:08:26.927] [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When all containers in pod are missing 
W0623 16:08:26.927]   should complete pod sandbox clean up based on the information in sandbox checkpoint
... skipping 194 lines ...
W0623 16:08:26.951]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
W0623 16:08:26.951] STEP: Collecting events from namespace "resource-usage-8124".
W0623 16:08:26.951] STEP: Found 4 events.
W0623 16:08:26.951] Jun 23 15:15:39.615: INFO: At 2021-06-23 15:10:40 +0000 UTC - event for cadvisor: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Pulled: Container image "google/cadvisor:latest" already present on machine
W0623 16:08:26.952] Jun 23 15:15:39.615: INFO: At 2021-06-23 15:10:40 +0000 UTC - event for cadvisor: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Created: Created container cadvisor
W0623 16:08:26.952] Jun 23 15:15:39.615: INFO: At 2021-06-23 15:10:40 +0000 UTC - event for cadvisor: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Started: Started container cadvisor
W0623 16:08:26.952] Jun 23 15:15:39.615: INFO: At 2021-06-23 15:10:41 +0000 UTC - event for cadvisor: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} BackOff: Back-off restarting failed container
W0623 16:08:26.952] Jun 23 15:15:39.617: INFO: POD       NODE                                                             PHASE    GRACE  CONDITIONS
W0623 16:08:26.953] Jun 23 15:15:39.617: INFO: cadvisor  n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-06-23 15:10:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-06-23 15:10:39 +0000 UTC ContainersNotReady containers with unready status: [cadvisor]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-06-23 15:10:39 +0000 UTC ContainersNotReady containers with unready status: [cadvisor]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-06-23 15:10:39 +0000 UTC  }]
W0623 16:08:26.953] Jun 23 15:15:39.617: INFO: 
W0623 16:08:26.953] Jun 23 15:15:39.619: INFO: 
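The cadvisor pod above is Running but not Ready (ContainersNotReady, with the restart count climbing under back-off). A sketch of how a framework-style readiness check reads that from the pod's conditions (podReady is an assumed helper, built on the core/v1 types):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// podReady returns true only when the pod's Ready condition is True;
// for the crash-looping cadvisor pod logged above it returns false.
func podReady(pod *v1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == v1.PodReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}

func main() {
	fmt.Println(podReady(&v1.Pod{})) // false: no Ready=True condition
}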
W0623 16:08:26.953] Logging node info for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:26.960] Jun 23 15:15:39.620: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8    f8abf7bb-64d0-4a09-8d76-bda6f82ca588 1546 0 2021-06-23 14:06:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-06-23 14:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e_node.test Update v1 2021-06-23 15:10:16 +0000 UTC FieldsV1 {"f:spec":{"f:configSource":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{}}}}} } {kubelet Update v1 2021-06-23 15:10:28 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{".":{},"f:active":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}},"f:assigned":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-qmcgm,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7807873024 0} {<nil>} 7624876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18833769646 0} {<nil>} 18833769646 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7545729024 0} {<nil>} 7368876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-23 15:10:39 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-23 15:10:39 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-23 15:10:39 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-06-23 15:10:39 +0000 UTC,LastTransitionTime:2021-06-23 15:10:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.44,},NodeAddress{Type:Hostname,Address:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4e796a5e880bc4c48961313e1ae0f7f2,SystemUUID:4e796a5e-880b-c4c4-8961-313e1ae0f7f2,BootID:652e6d17-906d-40c8-b209-9babb77c0a87,KernelVersion:5.12.7-300.fc34.x86_64,OSImage:Fedora CoreOS 34.20210529.3.0,ContainerRuntimeVersion:cri-o://1.21.0,KubeletVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,KubeProxyVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:70283c77abb54f37e57cf4b838ca8978a66e6da3bd72c555696e0eaae1356b58 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1648681988,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster@sha256:83aae3701992f5ab15b9093bc73e77b43cf61e2522d7bf90d61dcb383b818b22 k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:352434302,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs@sha256:90af3b1795d2669a4a07d3a0fecbaa2ac920ef69b3c588e93423e74501793cdc k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:272582535,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd@sha256:cba7b71304b6369c0d5e1ea5e70631354b5824c7f75dbce9d63149af216efbeb k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:128894977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost@sha256:ef11a0f696f3489a1684af5525419ac332df8682a148c6843b4da63c1503ee5b k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:126732584,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:100377317,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:55e2dc12800dbf891abc700ef3004acf08ec15cc0fab95634327c09fd6d097eb k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:99655908,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:5b9eab56404c721c2f193d7967b57a92339506dfdba37e496e48304ff172e5b4 k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:99654372,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 
docker.io/google/cadvisor:latest],SizeBytes:70377136,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot@sha256:93f8fe220940db5f92e1572e72b1457fc683ea3aebd24ac9474c6bca65660834 k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:43878048,},ContainerImage{Names:[gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b],SizeBytes:33602447,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx@sha256:ebf4de42b3d660133f6f7d0feddabe31a44d07ed55f59471fd2072b0d8e8afae k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:17245687,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils@sha256:d2a412b68cba0c952d98f837aeab5ab13e075dfbd78fcd183b76afa20de5bd3d k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:12250746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:244bdbdf4b8d368b5836e9d2c7808a280a73ad72ae321d644e9f220da503218f k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1374910,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause@sha256:369201a612f7b2b585a8e6ca99f77a36bcdbd032463d815388a96800b63ef2c8 k8s.gcr.io/pause:3.5],SizeBytes:689969,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-qmcgm,UID:0a294f53-6162-42e6-9112-17b7f7430e32,ResourceVersion:410,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-qmcgm,UID:0a294f53-6162-42e6-9112-17b7f7430e32,ResourceVersion:410,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
W0623 16:08:26.961] Jun 23 15:15:39.621: INFO: 
W0623 16:08:26.961] Logging kubelet events for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:26.961] Jun 23 15:15:39.622: INFO: 
W0623 16:08:26.961] Logging pods the kubelet thinks are on node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:26.961] Jun 23 15:15:39.634: INFO: cadvisor started at 2021-06-23 15:10:39 +0000 UTC (0+1 container statuses recorded)
W0623 16:08:26.961] Jun 23 15:15:39.634: INFO: 	Container cadvisor ready: false, restart count 5
... skipping 10 lines ...
W0623 16:08:26.963]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:59
W0623 16:08:26.963] W0623 15:15:39.663099    2494 metrics_grabber.go:89] Can't find any pods in namespace kube-system to grab metrics from
W0623 16:08:26.963] W0623 15:15:39.663273    2494 metrics_grabber.go:107] Can't find kube-scheduler pod. Grabbing metrics from kube-scheduler is disabled.
W0623 16:08:26.963] W0623 15:15:39.663347    2494 metrics_grabber.go:111] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled.
W0623 16:08:26.963] W0623 15:15:39.663419    2494 metrics_grabber.go:115] Can't find snapshot-controller pod. Grabbing metrics from snapshot-controller is disabled.
W0623 16:08:26.964] W0623 15:15:39.663475    2494 metrics_grabber.go:118] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
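The five warnings above are the e2e metrics grabber probing for control-plane pods before scraping; on a standalone node-e2e host, kube-system is empty, so each source is skipped rather than treated as an error. A minimal sketch of that probe pattern using client-go (the kubeconfig path and log text are illustrative, not the framework's exact code):

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable kubeconfig; on a node-e2e host this points at a
	// standalone kubelet setup where kube-system has no control-plane pods.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil || len(pods.Items) == 0 {
		// Mirrors the grabber's behaviour: nothing to scrape, so metric
		// collection from scheduler/controller-manager is simply disabled.
		log.Println("can't find any pods in namespace kube-system to grab metrics from")
		return
	}
	log.Printf("found %d candidate pods", len(pods.Items))
}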
W0623 16:08:26.964] Jun 23 15:15:39.681: INFO: runtime operation error metrics:
W0623 16:08:26.964] node "n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8" runtime operation error rate:
W0623 16:08:26.964] 
W0623 16:08:26.964] 
W0623 16:08:26.964] 
W0623 16:08:26.964] • Failure in Spec Setup (BeforeEach) [300.110 seconds]
W0623 16:08:26.964] [sig-node] Resource-usage [Serial] [Slow]
W0623 16:08:26.964] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:23
W0623 16:08:26.964]   regular resource usage tracking [BeforeEach]
W0623 16:08:26.965]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:67
W0623 16:08:26.965]     resource tracking for 10 pods per node
W0623 16:08:26.965]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:85
W0623 16:08:26.965] 
W0623 16:08:26.965]     Unexpected error:
W0623 16:08:26.965]         <*errors.errorString | 0xc00027ac30>: {
W0623 16:08:26.965]             s: "timed out waiting for the condition",
W0623 16:08:26.965]         }
W0623 16:08:26.965]         timed out waiting for the condition
W0623 16:08:26.965]     occurred
W0623 16:08:26.965] 
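"timed out waiting for the condition" is the stock message of wait.ErrWaitTimeout from k8s.io/apimachinery: the BeforeEach here polls for node readiness and gives up after the five-minute window (hence the 300.110-second spec duration). A minimal sketch of the polling pattern that yields this exact error (interval, timeout, and the condition body are illustrative):

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Poll a condition every 2s for up to 10s. If it never returns true,
	// wait.Poll returns wait.ErrWaitTimeout, whose Error() string is
	// exactly "timed out waiting for the condition".
	err := wait.Poll(2*time.Second, 10*time.Second, func() (bool, error) {
		nodeReady := false // stand-in for the real readiness check
		return nodeReady, nil
	})
	if err != nil {
		fmt.Println(err) // timed out waiting for the condition
	}
}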
... skipping 47 lines ...
W0623 16:08:26.972] I0623 15:16:05.042395    2494 util.go:247] new configuration has taken effect
W0623 16:08:26.972] STEP: Found 0 events.
W0623 16:08:26.972] Jun 23 15:16:05.047: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
W0623 16:08:26.972] Jun 23 15:16:05.047: INFO: 
W0623 16:08:26.972] Jun 23 15:16:05.049: INFO: 
W0623 16:08:26.973] Logging node info for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:26.980] Jun 23 15:16:05.051: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8    f8abf7bb-64d0-4a09-8d76-bda6f82ca588 1703 0 2021-06-23 14:06:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-06-23 14:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e_node.test Update v1 2021-06-23 15:10:16 +0000 UTC FieldsV1 {"f:spec":{"f:configSource":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{}}}}} } {kubelet Update v1 2021-06-23 15:10:28 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{".":{},"f:active":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}},"f:assigned":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-gs4xr,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7807873024 0} {<nil>} 7624876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7545729024 0} {<nil>} 7368876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-23 15:16:00 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-23 15:16:00 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-23 15:16:00 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-06-23 15:16:00 +0000 UTC,LastTransitionTime:2021-06-23 15:16:00 +0000 UTC,Reason:KubeletNotReady,Message:container runtime status 
check may not have completed yet,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.44,},NodeAddress{Type:Hostname,Address:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4e796a5e880bc4c48961313e1ae0f7f2,SystemUUID:4e796a5e-880b-c4c4-8961-313e1ae0f7f2,BootID:652e6d17-906d-40c8-b209-9babb77c0a87,KernelVersion:5.12.7-300.fc34.x86_64,OSImage:Fedora CoreOS 34.20210529.3.0,ContainerRuntimeVersion:cri-o://1.21.0,KubeletVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,KubeProxyVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:70283c77abb54f37e57cf4b838ca8978a66e6da3bd72c555696e0eaae1356b58 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1648681988,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster@sha256:83aae3701992f5ab15b9093bc73e77b43cf61e2522d7bf90d61dcb383b818b22 k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:352434302,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs@sha256:90af3b1795d2669a4a07d3a0fecbaa2ac920ef69b3c588e93423e74501793cdc k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:272582535,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd@sha256:cba7b71304b6369c0d5e1ea5e70631354b5824c7f75dbce9d63149af216efbeb k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:128894977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost@sha256:ef11a0f696f3489a1684af5525419ac332df8682a148c6843b4da63c1503ee5b k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:126732584,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:100377317,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:55e2dc12800dbf891abc700ef3004acf08ec15cc0fab95634327c09fd6d097eb k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:99655908,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:5b9eab56404c721c2f193d7967b57a92339506dfdba37e496e48304ff172e5b4 k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:99654372,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 
docker.io/google/cadvisor:latest],SizeBytes:70377136,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot@sha256:93f8fe220940db5f92e1572e72b1457fc683ea3aebd24ac9474c6bca65660834 k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:43878048,},ContainerImage{Names:[gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b],SizeBytes:33602447,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx@sha256:ebf4de42b3d660133f6f7d0feddabe31a44d07ed55f59471fd2072b0d8e8afae k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:17245687,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils@sha256:d2a412b68cba0c952d98f837aeab5ab13e075dfbd78fcd183b76afa20de5bd3d k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:12250746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:244bdbdf4b8d368b5836e9d2c7808a280a73ad72ae321d644e9f220da503218f k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1374910,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause@sha256:369201a612f7b2b585a8e6ca99f77a36bcdbd032463d815388a96800b63ef2c8 k8s.gcr.io/pause:3.5],SizeBytes:689969,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-gs4xr,UID:87904b10-eb33-49f7-b4c5-cc40598d20c9,ResourceVersion:1690,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-gs4xr,UID:87904b10-eb33-49f7-b4c5-cc40598d20c9,ResourceVersion:1690,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
W0623 16:08:26.980] Jun 23 15:16:05.051: INFO: 
W0623 16:08:26.980] Logging kubelet events for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:26.980] Jun 23 15:16:05.053: INFO: 
W0623 16:08:26.981] Logging pods the kubelet thinks are on node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:26.981] Jun 23 15:16:05.056: INFO: cadvisor started at 2021-06-23 15:10:39 +0000 UTC (0+1 container statuses recorded)
W0623 16:08:26.981] Jun 23 15:16:05.056: INFO: 	Container cadvisor ready: false, restart count 5
... skipping 17 lines ...
W0623 16:08:26.984]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/quota_lsci_test.go:57
W0623 16:08:26.984]     
W0623 16:08:26.984]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:460
W0623 16:08:26.984]       should eventually evict all of the correct pods [BeforeEach]
W0623 16:08:26.984]       _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:475
W0623 16:08:26.984] 
W0623 16:08:26.984]       Unexpected error:
W0623 16:08:26.984]           <*exec.ExitError | 0xc000a077c0>: {
W0623 16:08:26.984]               ProcessState: {
W0623 16:08:26.984]                   pid: 15683,
W0623 16:08:26.984]                   status: 256,
W0623 16:08:26.985]                   rusage: {
W0623 16:08:26.985]                       Utime: {Sec: 0, Usec: 26018},
... skipping 35 lines ...
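The status: 256 in the dumped exec.ExitError above is the raw Unix wait status, not the exit code; for a normal exit the code sits in the high byte, so 256 decodes to exit status 1. A small sketch of that decoding on a Unix host:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"syscall"
)

func main() {
	err := exec.Command("sh", "-c", "exit 1").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// On Unix, Sys() is a syscall.WaitStatus; the raw value for a
		// normal exit with code 1 is 1<<8 == 256.
		ws := ee.Sys().(syscall.WaitStatus)
		fmt.Printf("raw status: %d, exit code: %d\n", ws, ws.ExitStatus())
	}
}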
W0623 16:08:26.988]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
W0623 16:08:26.989] Jun 23 15:16:11.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0623 16:08:26.989] STEP: Destroying namespace "kubelet-container-manager-5122" for this suite.
W0623 16:08:26.989] •
W0623 16:08:26.989] ------------------------------
W0623 16:08:26.989] [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] Without SRIOV devices in the system 
W0623 16:08:26.989]   should return the expected error with the feature gate disabled
W0623 16:08:26.989]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/podresources_test.go:634
W0623 16:08:26.989] [BeforeEach] [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources]
W0623 16:08:26.990]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
W0623 16:08:26.990] STEP: Creating a kubernetes client
W0623 16:08:26.990] STEP: Building a namespace api object, basename podresources-test
W0623 16:08:26.990] Jun 23 15:16:11.153: INFO: Skipping waiting for service account
W0623 16:08:26.990] [It] should return the expected error with the feature gate disabled
W0623 16:08:26.990]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/podresources_test.go:634
W0623 16:08:26.990] STEP: checking GetAllocatableResources fails if the feature gate is not enabled
W0623 16:08:26.990] [AfterEach] [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources]
W0623 16:08:26.991]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
W0623 16:08:26.991] Jun 23 15:16:11.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0623 16:08:26.991] STEP: Destroying namespace "podresources-test-8087" for this suite.
W0623 16:08:26.991] •SSSSSSS
W0623 16:08:26.991] ------------------------------
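The passing test above asserts that GetAllocatableResources is rejected while its feature gate (KubeletPodResourcesGetAllocatable) is off. A minimal sketch of calling that endpoint over the kubelet's pod-resources socket; the socket path and dial options are assumptions, not pinned by this log:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	podresourcesv1 "k8s.io/kubelet/pkg/apis/podresources/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Assumed default pod-resources socket location on a Linux node.
	conn, err := grpc.DialContext(ctx, "unix:///var/lib/kubelet/pod-resources/kubelet.sock",
		grpc.WithInsecure(), grpc.WithBlock())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := podresourcesv1.NewPodResourcesListerClient(conn)
	resp, err := client.GetAllocatableResources(ctx, &podresourcesv1.AllocatableResourcesRequest{})
	if err != nil {
		// With the feature gate disabled, the kubelet returns an error here,
		// which is exactly the outcome the test expects.
		log.Fatalf("GetAllocatableResources failed: %v", err)
	}
	fmt.Printf("allocatable CPUs: %v\n", resp.GetCpuIds())
}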
... skipping 642 lines ...
W0623 16:08:27.077] I0623 15:37:04.069659    2494 util.go:247] new configuration has taken effect
W0623 16:08:27.077] STEP: Found 0 events.
W0623 16:08:27.078] Jun 23 15:37:04.073: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
W0623 16:08:27.078] Jun 23 15:37:04.073: INFO: 
W0623 16:08:27.078] Jun 23 15:37:04.075: INFO: 
W0623 16:08:27.078] Logging node info for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:27.085] Jun 23 15:37:04.076: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8    f8abf7bb-64d0-4a09-8d76-bda6f82ca588 3269 0 2021-06-23 14:06:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-06-23 14:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e_node.test Update v1 2021-06-23 15:36:27 +0000 UTC FieldsV1 {"f:spec":{"f:configSource":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{}}}}} } {kubelet Update v1 2021-06-23 15:36:38 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{".":{},"f:active":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}},"f:assigned":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-7gz2d,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7807873024 0} {<nil>} 7624876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7545729024 0} {<nil>} 7368876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-23 15:37:02 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-23 15:37:02 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-23 15:37:02 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-06-23 15:37:02 +0000 UTC,LastTransitionTime:2021-06-23 15:37:02 +0000 UTC,Reason:KubeletNotReady,Message:container runtime status 
check may not have completed yet,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.44,},NodeAddress{Type:Hostname,Address:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4e796a5e880bc4c48961313e1ae0f7f2,SystemUUID:4e796a5e-880b-c4c4-8961-313e1ae0f7f2,BootID:652e6d17-906d-40c8-b209-9babb77c0a87,KernelVersion:5.12.7-300.fc34.x86_64,OSImage:Fedora CoreOS 34.20210529.3.0,ContainerRuntimeVersion:cri-o://1.21.0,KubeletVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,KubeProxyVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:70283c77abb54f37e57cf4b838ca8978a66e6da3bd72c555696e0eaae1356b58 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1648681988,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster@sha256:83aae3701992f5ab15b9093bc73e77b43cf61e2522d7bf90d61dcb383b818b22 k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:352434302,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs@sha256:90af3b1795d2669a4a07d3a0fecbaa2ac920ef69b3c588e93423e74501793cdc k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:272582535,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd@sha256:cba7b71304b6369c0d5e1ea5e70631354b5824c7f75dbce9d63149af216efbeb k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:128894977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost@sha256:ef11a0f696f3489a1684af5525419ac332df8682a148c6843b4da63c1503ee5b k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:126732584,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:100377317,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:55e2dc12800dbf891abc700ef3004acf08ec15cc0fab95634327c09fd6d097eb k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:99655908,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:5b9eab56404c721c2f193d7967b57a92339506dfdba37e496e48304ff172e5b4 k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:99654372,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 
docker.io/google/cadvisor:latest],SizeBytes:70377136,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot@sha256:93f8fe220940db5f92e1572e72b1457fc683ea3aebd24ac9474c6bca65660834 k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:43878048,},ContainerImage{Names:[gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b],SizeBytes:33602447,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx@sha256:ebf4de42b3d660133f6f7d0feddabe31a44d07ed55f59471fd2072b0d8e8afae k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:17245687,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils@sha256:d2a412b68cba0c952d98f837aeab5ab13e075dfbd78fcd183b76afa20de5bd3d k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:12250746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:244bdbdf4b8d368b5836e9d2c7808a280a73ad72ae321d644e9f220da503218f k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1374910,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause@sha256:369201a612f7b2b585a8e6ca99f77a36bcdbd032463d815388a96800b63ef2c8 k8s.gcr.io/pause:3.5],SizeBytes:689969,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-7gz2d,UID:2c95a9ec-aed1-43f4-be58-d0c0a433ce15,ResourceVersion:3256,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-7gz2d,UID:2c95a9ec-aed1-43f4-be58-d0c0a433ce15,ResourceVersion:3256,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
W0623 16:08:27.086] Jun 23 15:37:04.077: INFO: 
W0623 16:08:27.086] Logging kubelet events for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:27.086] Jun 23 15:37:04.078: INFO: 
W0623 16:08:27.086] Logging pods the kubelet thinks are on node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:27.086] Jun 23 15:37:04.081: INFO: cadvisor started at 2021-06-23 15:10:39 +0000 UTC (0+1 container statuses recorded)
W0623 16:08:27.086] Jun 23 15:37:04.081: INFO: 	Container cadvisor ready: false, restart count 5
... skipping 19 lines ...
W0623 16:08:27.090]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:173
W0623 16:08:27.090]     
W0623 16:08:27.090]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:460
W0623 16:08:27.090]       should eventually evict all of the correct pods [BeforeEach]
W0623 16:08:27.090]       _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:475
W0623 16:08:27.090] 
W0623 16:08:27.090]       Unexpected error:
W0623 16:08:27.090]           <*exec.ExitError | 0xc000aa44e0>: {
W0623 16:08:27.090]               ProcessState: {
W0623 16:08:27.091]                   pid: 31246,
W0623 16:08:27.091]                   status: 256,
W0623 16:08:27.091]                   rusage: {
W0623 16:08:27.091]                       Utime: {Sec: 0, Usec: 30482},
... skipping 164 lines ...
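The eviction suites in this run reconfigure the kubelet's hard eviction thresholds before each BeforeEach; the values visible in the dumped test config later in this log (memory.available: 250Mi, nodefs.available: 10%, nodefs.inodesFree: 5%) live in the EvictionHard field of the kubelet configuration. A sketch of setting those thresholds (values copied from the dump; the marshalling is illustrative):

package main

import (
	"fmt"
	"log"

	kubeletconfigv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletconfigv1beta1.KubeletConfiguration{
		// Hard eviction thresholds: the kubelet evicts pods as soon as any
		// of these signals crosses its threshold.
		EvictionHard: map[string]string{
			"memory.available":  "250Mi",
			"nodefs.available":  "10%",
			"nodefs.inodesFree": "5%",
		},
	}
	out, err := yaml.Marshal(&cfg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}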
W0623 16:08:27.112] STEP: Collecting events from namespace "priority-disk-eviction-ordering-test-1480".
W0623 16:08:27.112] STEP: Found 0 events.
W0623 16:08:27.112] Jun 23 15:37:39.517: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
W0623 16:08:27.112] Jun 23 15:37:39.517: INFO: 
W0623 16:08:27.112] Jun 23 15:37:39.519: INFO: 
W0623 16:08:27.112] Logging node info for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:27.121] Jun 23 15:37:39.521: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8    f8abf7bb-64d0-4a09-8d76-bda6f82ca588 3321 0 2021-06-23 14:06:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-06-23 14:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e_node.test Update v1 2021-06-23 15:36:27 +0000 UTC FieldsV1 {"f:spec":{"f:configSource":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{}}}}} } {kubelet Update v1 2021-06-23 15:36:38 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{".":{},"f:active":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}},"f:assigned":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-bw5z4,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7807873024 0} {<nil>} 7624876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7545729024 0} {<nil>} 7368876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-23 15:37:38 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-23 15:37:38 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-23 15:37:38 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-06-23 15:37:38 +0000 UTC,LastTransitionTime:2021-06-23 15:37:38 +0000 UTC,Reason:KubeletNotReady,Message:container runtime status 
check may not have completed yet,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.44,},NodeAddress{Type:Hostname,Address:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4e796a5e880bc4c48961313e1ae0f7f2,SystemUUID:4e796a5e-880b-c4c4-8961-313e1ae0f7f2,BootID:652e6d17-906d-40c8-b209-9babb77c0a87,KernelVersion:5.12.7-300.fc34.x86_64,OSImage:Fedora CoreOS 34.20210529.3.0,ContainerRuntimeVersion:cri-o://1.21.0,KubeletVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,KubeProxyVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:70283c77abb54f37e57cf4b838ca8978a66e6da3bd72c555696e0eaae1356b58 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1648681988,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster@sha256:83aae3701992f5ab15b9093bc73e77b43cf61e2522d7bf90d61dcb383b818b22 k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:352434302,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs@sha256:90af3b1795d2669a4a07d3a0fecbaa2ac920ef69b3c588e93423e74501793cdc k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:272582535,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd@sha256:cba7b71304b6369c0d5e1ea5e70631354b5824c7f75dbce9d63149af216efbeb k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:128894977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost@sha256:ef11a0f696f3489a1684af5525419ac332df8682a148c6843b4da63c1503ee5b k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:126732584,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:100377317,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:55e2dc12800dbf891abc700ef3004acf08ec15cc0fab95634327c09fd6d097eb k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:99655908,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:5b9eab56404c721c2f193d7967b57a92339506dfdba37e496e48304ff172e5b4 k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:99654372,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 
docker.io/google/cadvisor:latest],SizeBytes:70377136,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot@sha256:93f8fe220940db5f92e1572e72b1457fc683ea3aebd24ac9474c6bca65660834 k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:43878048,},ContainerImage{Names:[gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b],SizeBytes:33602447,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx@sha256:ebf4de42b3d660133f6f7d0feddabe31a44d07ed55f59471fd2072b0d8e8afae k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:17245687,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils@sha256:d2a412b68cba0c952d98f837aeab5ab13e075dfbd78fcd183b76afa20de5bd3d k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:12250746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:244bdbdf4b8d368b5836e9d2c7808a280a73ad72ae321d644e9f220da503218f k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1374910,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause@sha256:369201a612f7b2b585a8e6ca99f77a36bcdbd032463d815388a96800b63ef2c8 k8s.gcr.io/pause:3.5],SizeBytes:689969,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-bw5z4,UID:e6b60c51-8c82-4256-9108-5d48580ea138,ResourceVersion:3308,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-bw5z4,UID:e6b60c51-8c82-4256-9108-5d48580ea138,ResourceVersion:3308,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
W0623 16:08:27.122] Jun 23 15:37:39.521: INFO: 
W0623 16:08:27.122] Logging kubelet events for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:27.122] Jun 23 15:37:39.522: INFO: 
W0623 16:08:27.122] Logging pods the kubelet thinks are on node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:27.122] Jun 23 15:37:39.525: INFO: cadvisor started at 2021-06-23 15:10:39 +0000 UTC (0+1 container statuses recorded)
W0623 16:08:27.122] Jun 23 15:37:39.525: INFO: 	Container cadvisor ready: false, restart count 5
... skipping 19 lines ...
W0623 16:08:27.127]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:351
W0623 16:08:27.127]     
W0623 16:08:27.127]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:460
W0623 16:08:27.127]       should eventually evict all of the correct pods [BeforeEach]
W0623 16:08:27.127]       _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:475
W0623 16:08:27.128] 
W0623 16:08:27.128]       Unexpected error:
W0623 16:08:27.128]           <*exec.ExitError | 0xc000a3d8c0>: {
W0623 16:08:27.128]               ProcessState: {
W0623 16:08:27.128]                   pid: 31562,
W0623 16:08:27.128]                   status: 256,
W0623 16:08:27.128]                   rusage: {
W0623 16:08:27.128]                       Utime: {Sec: 0, Usec: 26703},
... skipping 176 lines ...
W0623 16:08:27.154] STEP: Building a namespace api object, basename topology-manager-test
W0623 16:08:27.154] Jun 23 15:38:27.647: INFO: Skipping waiting for service account
W0623 16:08:27.154] [It] run Topology Manager policy test suite
W0623 16:08:27.154]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/topology_manager_test.go:888
W0623 16:08:27.154] STEP: by configuring Topology Manager policy to single-numa-node
W0623 16:08:27.154] Jun 23 15:38:27.653: INFO: Configuring Topology Manager policy to single-numa-node
W0623 16:08:27.155] Jun 23 15:38:27.656: INFO: failed to find any VF device from [{0000:00:00.0 -1 false false} {0000:00:01.0 -1 false false} {0000:00:01.3 -1 false false} {0000:00:03.0 -1 false false} {0000:00:04.0 -1 false false} {0000:00:05.0 -1 false false}]
W0623 16:08:27.156] Jun 23 15:38:27.656: INFO: New kubelet config is {{ } %!s(bool=true) /tmp/node-e2e-20210623T140232/static-pods515999671 {1m0s} {10s} {20s}  map[] 0.0.0.0 %!s(int32=10250) %!s(int32=10255) /usr/libexec/kubernetes/kubelet-plugins/volume/exec/  /var/lib/kubelet/pki/kubelet.crt /var/lib/kubelet/pki/kubelet.key []  %!s(bool=false) %!s(bool=false) {{} {%!s(bool=false) {2m0s}} {%!s(bool=true)}} {AlwaysAllow {{5m0s} {30s}}} %!s(int32=5) %!s(int32=10) %!s(int32=5) %!s(int32=10) %!s(bool=true) %!s(bool=false) %!s(int32=10248) 127.0.0.1 %!s(int32=-999)  [] {4h0m0s} {10s} {5m0s} %!s(int32=40) {2m0s} %!s(int32=85) %!s(int32=80) {10s} /system.slice/kubelet.service  / %!s(bool=true) systemd static {1s} None single-numa-node container map[] {2m0s} promiscuous-bridge %!s(int32=110) 10.100.0.0/24 %!s(int64=-1) /etc/resolv.conf %!s(bool=false) %!s(bool=true) {100ms} %!s(int64=1000000) %!s(int32=50) application/vnd.kubernetes.protobuf %!s(int32=5) %!s(int32=10) %!s(bool=false) map[memory.available:250Mi nodefs.available:10% nodefs.inodesFree:5%] map[] map[] {30s} %!s(int32=0) map[nodefs.available:5% nodefs.inodesFree:5%] %!s(int32=0) %!s(bool=true) %!s(bool=false) %!s(bool=true) %!s(int32=14) %!s(int32=15) map[CPUManager:%!s(bool=true) DynamicKubeletConfig:%!s(bool=true) LocalStorageCapacityIsolation:%!s(bool=true) TopologyManager:%!s(bool=true)] %!s(bool=true) 10Mi %!s(int32=5) Watch [] %!s(bool=false) map[] map[cpu:200m]   [pods]   {text %!s(bool=false)} %!s(bool=true) {0s} {0s} [] %!s(bool=true) %!s(bool=true)}
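The %!s(bool=true)-style tokens in the config dump above are not log corruption; they are Go's fmt package flagging that the %s verb was applied to non-string fields of the kubelet configuration struct. A two-line reproduction:

package main

import "fmt"

func main() {
	// Printing a bool and an int32 through %s reproduces the markers
	// seen in the kubelet config dump above.
	fmt.Printf("%s %s\n", true, int32(10250))
	// Output: %!s(bool=true) %!s(int32=10250)
}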
W0623 16:08:27.156] I0623 15:38:32.130456    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.156] I0623 15:38:32.181912    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.156] I0623 15:38:32.181936    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.156] I0623 15:38:32.774657    2494 util.go:247] new configuration has taken effect
W0623 16:08:27.157] STEP: running a non-Gu pod
... skipping 6 lines ...
W0623 16:08:27.157] Jun 23 15:38:36.856: INFO: Waiting for pod non-gu-pod to disappear
W0623 16:08:27.157] Jun 23 15:38:36.860: INFO: Pod non-gu-pod no longer exists
W0623 16:08:27.158] I0623 15:38:36.860930    2494 remote_runtime.go:54] "Connecting to runtime service" endpoint="unix:///var/run/crio/crio.sock"
W0623 16:08:27.158] I0623 15:38:36.861017    2494 remote_image.go:41] "Connecting to image service" endpoint="unix:///var/run/crio/crio.sock"
W0623 16:08:27.158] STEP: running a Gu pod
W0623 16:08:27.158] Jun 23 15:39:32.937: INFO: The status of Pod gu-pod is Pending, waiting for it to be Running (with Ready = true)
W0623 16:08:27.158] Jun 23 15:39:34.941: INFO: The status of Pod gu-pod is Failed, which is unexpected
W0623 16:08:27.158] [AfterEach] With kubeconfig updated to static CPU Manager policy run the Topology Manager tests
W0623 16:08:27.158]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/topology_manager_test.go:969
W0623 16:08:27.159] I0623 15:39:37.317392    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.159] I0623 15:39:37.369469    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.159] I0623 15:39:37.369494    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.159] I0623 15:39:38.370829    2494 server.go:182] Initial health check passed for service "kubelet"
... skipping 5 lines ...
W0623 16:08:27.160] Jun 23 15:39:40.005: INFO: At 2021-06-23 15:38:35 +0000 UTC - event for non-gu-pod: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine
W0623 16:08:27.160] Jun 23 15:39:40.005: INFO: At 2021-06-23 15:38:35 +0000 UTC - event for non-gu-pod: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Created: Created container non-gu-container
W0623 16:08:27.161] Jun 23 15:39:40.005: INFO: At 2021-06-23 15:38:35 +0000 UTC - event for non-gu-pod: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Started: Started container non-gu-container
W0623 16:08:27.161] Jun 23 15:39:40.005: INFO: At 2021-06-23 15:38:36 +0000 UTC - event for non-gu-pod: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Killing: Stopping container non-gu-container
W0623 16:08:27.161] Jun 23 15:39:40.005: INFO: At 2021-06-23 15:39:32 +0000 UTC - event for gu-pod: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} TopologyAffinityError: Resources cannot be allocated with Topology locality
W0623 16:08:27.161] Jun 23 15:39:40.007: INFO: POD     NODE                                                             PHASE   GRACE  CONDITIONS
W0623 16:08:27.161] Jun 23 15:39:40.007: INFO: gu-pod  n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8  Failed         []
W0623 16:08:27.161] Jun 23 15:39:40.007: INFO: 
W0623 16:08:27.161] Jun 23 15:39:40.009: INFO: 
W0623 16:08:27.162] Logging node info for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:27.169] Jun 23 15:39:40.011: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8    f8abf7bb-64d0-4a09-8d76-bda6f82ca588 3416 0 2021-06-23 14:06:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-06-23 14:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e_node.test Update v1 2021-06-23 15:36:27 +0000 UTC FieldsV1 {"f:spec":{"f:configSource":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{}}}}} } {kubelet Update v1 2021-06-23 15:38:32 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:cpu":{},"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{".":{},"f:active":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}},"f:assigned":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-p82t9,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7807873024 0} {<nil>} 7624876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7545729024 0} {<nil>} 7368876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-23 15:39:38 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-23 15:39:38 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-23 15:39:38 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-06-23 15:39:38 +0000 UTC,LastTransitionTime:2021-06-23 15:39:38 +0000 UTC,Reason:KubeletNotReady,Message:container runtime 
status check may not have completed yet,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.44,},NodeAddress{Type:Hostname,Address:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4e796a5e880bc4c48961313e1ae0f7f2,SystemUUID:4e796a5e-880b-c4c4-8961-313e1ae0f7f2,BootID:652e6d17-906d-40c8-b209-9babb77c0a87,KernelVersion:5.12.7-300.fc34.x86_64,OSImage:Fedora CoreOS 34.20210529.3.0,ContainerRuntimeVersion:cri-o://1.21.0,KubeletVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,KubeProxyVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:70283c77abb54f37e57cf4b838ca8978a66e6da3bd72c555696e0eaae1356b58 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1648681988,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster@sha256:83aae3701992f5ab15b9093bc73e77b43cf61e2522d7bf90d61dcb383b818b22 k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:352434302,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs@sha256:90af3b1795d2669a4a07d3a0fecbaa2ac920ef69b3c588e93423e74501793cdc k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:272582535,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd@sha256:cba7b71304b6369c0d5e1ea5e70631354b5824c7f75dbce9d63149af216efbeb k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:128894977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost@sha256:ef11a0f696f3489a1684af5525419ac332df8682a148c6843b4da63c1503ee5b k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:126732584,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:100377317,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:55e2dc12800dbf891abc700ef3004acf08ec15cc0fab95634327c09fd6d097eb k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:99655908,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:5b9eab56404c721c2f193d7967b57a92339506dfdba37e496e48304ff172e5b4 k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:99654372,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 
docker.io/google/cadvisor:latest],SizeBytes:70377136,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot@sha256:93f8fe220940db5f92e1572e72b1457fc683ea3aebd24ac9474c6bca65660834 k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:43878048,},ContainerImage{Names:[gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b],SizeBytes:33602447,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx@sha256:ebf4de42b3d660133f6f7d0feddabe31a44d07ed55f59471fd2072b0d8e8afae k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:17245687,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils@sha256:d2a412b68cba0c952d98f837aeab5ab13e075dfbd78fcd183b76afa20de5bd3d k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:12250746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:244bdbdf4b8d368b5836e9d2c7808a280a73ad72ae321d644e9f220da503218f k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1374910,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause@sha256:369201a612f7b2b585a8e6ca99f77a36bcdbd032463d815388a96800b63ef2c8 k8s.gcr.io/pause:3.5],SizeBytes:689969,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-p82t9,UID:db7809e3-be30-4128-be16-3c884bee290a,ResourceVersion:3406,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-p82t9,UID:db7809e3-be30-4128-be16-3c884bee290a,ResourceVersion:3406,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
W0623 16:08:27.169] Jun 23 15:39:40.011: INFO: 
W0623 16:08:27.169] Logging kubelet events for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:27.169] Jun 23 15:39:40.013: INFO: 
W0623 16:08:27.169] Logging pods the kubelet thinks are on node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:27.170] Jun 23 15:39:40.016: INFO: gu-pod started at 2021-06-23 15:39:32 +0000 UTC (0+0 container statuses recorded)
W0623 16:08:27.170] W0623 15:39:40.017971    2494 metrics_grabber.go:89] Can't find any pods in namespace kube-system to grab metrics from
... skipping 16 lines ...
W0623 16:08:27.172] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:23
W0623 16:08:27.172]   With kubeconfig updated to static CPU Manager policy run the Topology Manager tests
W0623 16:08:27.173]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/topology_manager_test.go:979
W0623 16:08:27.173]     run Topology Manager policy test suite [It]
W0623 16:08:27.173]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/topology_manager_test.go:888
W0623 16:08:27.173] 
W0623 16:08:27.173]     Unexpected error:
W0623 16:08:27.173]         <*errors.errorString | 0xc0003bc560>: {
W0623 16:08:27.173]             s: "pod ran to completion",
W0623 16:08:27.173]         }
W0623 16:08:27.173]         pod ran to completion
W0623 16:08:27.173]     occurred
W0623 16:08:27.173] 
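"pod ran to completion" is the generic message the e2e wait helpers return when a pod expected to stay Running reaches a terminal phase instead; here the topology test pod evidently exited before the suite's assertions completed. A minimal Go sketch of that kind of condition check, assuming only the core v1 API (the helper name is hypothetical, not the framework's):

package main

import (
	"errors"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// podStillRunning is a hypothetical stand-in for the e2e wait condition:
// it holds while the pod is Running and surfaces "pod ran to completion"
// once the pod reaches a terminal phase.
func podStillRunning(pod *v1.Pod) (bool, error) {
	switch pod.Status.Phase {
	case v1.PodRunning:
		return true, nil
	case v1.PodSucceeded, v1.PodFailed:
		return false, errors.New("pod ran to completion")
	default:
		return false, nil // still Pending; keep polling
	}
}

func main() {
	pod := &v1.Pod{Status: v1.PodStatus{Phase: v1.PodSucceeded}}
	ok, err := podStillRunning(pod)
	fmt.Println(ok, err) // false pod ran to completion
}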
... skipping 43 lines ...
W0623 16:08:27.179] I0623 15:40:15.438860    2494 util.go:247] new configuration has taken effect
W0623 16:08:27.180] STEP: Found 0 events.
W0623 16:08:27.180] Jun 23 15:40:15.444: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
W0623 16:08:27.180] Jun 23 15:40:15.444: INFO: 
W0623 16:08:27.180] Jun 23 15:40:15.446: INFO: 
W0623 16:08:27.180] Logging node info for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:27.187] Jun 23 15:40:15.448: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8    f8abf7bb-64d0-4a09-8d76-bda6f82ca588 3465 0 2021-06-23 14:06:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-06-23 14:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e_node.test Update v1 2021-06-23 15:36:27 +0000 UTC FieldsV1 {"f:spec":{"f:configSource":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{}}}}} } {kubelet Update v1 2021-06-23 15:38:32 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:cpu":{},"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{".":{},"f:active":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}},"f:assigned":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-m7nzh,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7807873024 0} {<nil>} 7624876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7545729024 0} {<nil>} 7368876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-23 15:40:11 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-23 15:40:11 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-23 15:40:11 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-06-23 15:40:11 +0000 UTC,LastTransitionTime:2021-06-23 15:40:11 +0000 UTC,Reason:KubeletNotReady,Message:container runtime 
status check may not have completed yet,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.44,},NodeAddress{Type:Hostname,Address:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4e796a5e880bc4c48961313e1ae0f7f2,SystemUUID:4e796a5e-880b-c4c4-8961-313e1ae0f7f2,BootID:652e6d17-906d-40c8-b209-9babb77c0a87,KernelVersion:5.12.7-300.fc34.x86_64,OSImage:Fedora CoreOS 34.20210529.3.0,ContainerRuntimeVersion:cri-o://1.21.0,KubeletVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,KubeProxyVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:70283c77abb54f37e57cf4b838ca8978a66e6da3bd72c555696e0eaae1356b58 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1648681988,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster@sha256:83aae3701992f5ab15b9093bc73e77b43cf61e2522d7bf90d61dcb383b818b22 k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:352434302,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs@sha256:90af3b1795d2669a4a07d3a0fecbaa2ac920ef69b3c588e93423e74501793cdc k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:272582535,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd@sha256:cba7b71304b6369c0d5e1ea5e70631354b5824c7f75dbce9d63149af216efbeb k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:128894977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost@sha256:ef11a0f696f3489a1684af5525419ac332df8682a148c6843b4da63c1503ee5b k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:126732584,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:100377317,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:55e2dc12800dbf891abc700ef3004acf08ec15cc0fab95634327c09fd6d097eb k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:99655908,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:5b9eab56404c721c2f193d7967b57a92339506dfdba37e496e48304ff172e5b4 k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:99654372,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 
docker.io/google/cadvisor:latest],SizeBytes:70377136,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot@sha256:93f8fe220940db5f92e1572e72b1457fc683ea3aebd24ac9474c6bca65660834 k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:43878048,},ContainerImage{Names:[gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b],SizeBytes:33602447,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx@sha256:ebf4de42b3d660133f6f7d0feddabe31a44d07ed55f59471fd2072b0d8e8afae k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:17245687,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils@sha256:d2a412b68cba0c952d98f837aeab5ab13e075dfbd78fcd183b76afa20de5bd3d k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:12250746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:244bdbdf4b8d368b5836e9d2c7808a280a73ad72ae321d644e9f220da503218f k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1374910,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause@sha256:369201a612f7b2b585a8e6ca99f77a36bcdbd032463d815388a96800b63ef2c8 k8s.gcr.io/pause:3.5],SizeBytes:689969,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-m7nzh,UID:48090160-6a8f-4ceb-bdb0-e1c28116a536,ResourceVersion:3450,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-m7nzh,UID:48090160-6a8f-4ceb-bdb0-e1c28116a536,ResourceVersion:3450,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
W0623 16:08:27.188] Jun 23 15:40:15.449: INFO: 
W0623 16:08:27.188] Logging kubelet events for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:27.188] Jun 23 15:40:15.450: INFO: 
W0623 16:08:27.188] Logging pods the kubelet thinks are on node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:27.189] W0623 15:40:15.456709    2494 metrics_grabber.go:89] Can't find any pods in namespace kube-system to grab metrics from
W0623 16:08:27.189] W0623 15:40:15.456729    2494 metrics_grabber.go:107] Can't find kube-scheduler pod. Grabbing metrics from kube-scheduler is disabled.
... skipping 15 lines ...
W0623 16:08:27.192]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/quota_lsci_test.go:57
W0623 16:08:27.193]     
W0623 16:08:27.193]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:460
W0623 16:08:27.193]       should eventually evict all of the correct pods [BeforeEach]
W0623 16:08:27.193]       _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:475
W0623 16:08:27.193] 
W0623 16:08:27.193]       Unexpected error:
W0623 16:08:27.194]           <*exec.ExitError | 0xc000465140>: {
W0623 16:08:27.194]               ProcessState: {
W0623 16:08:27.194]                   pid: 32708,
W0623 16:08:27.194]                   status: 256,
W0623 16:08:27.194]                   rusage: {
W0623 16:08:27.194]                       Utime: {Sec: 0, Usec: 25319},
... skipping 65 lines ...
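For reference on the ExitError dump above: on Unix, os/exec reports the raw wait status, and a status of 256 encodes exit code 1 (the code sits in the high byte). A self-contained sketch, standard library only:

package main

import (
	"fmt"
	"os/exec"
	"syscall"
)

func main() {
	// Run a command that exits 1, mirroring the helper that failed above.
	err := exec.Command("sh", "-c", "exit 1").Run()
	if ee, ok := err.(*exec.ExitError); ok {
		// On Unix, Sys() is a syscall.WaitStatus; the raw value 256
		// seen in the log is the exit code shifted into the high byte.
		ws := ee.Sys().(syscall.WaitStatus)
		fmt.Printf("raw=%d exit=%d\n", uint32(ws), ws.ExitStatus()) // raw=256 exit=1
	}
}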
W0623 16:08:27.206] STEP: back to "Node.Spec.ConfigSource is nil" from "correct"
W0623 16:08:27.207] I0623 15:40:36.678220    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.207] I0623 15:40:47.694442    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.207] I0623 15:40:47.739261    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.207] I0623 15:40:47.739283    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.208] I0623 15:40:48.740326    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.208] STEP: from "Node.Spec.ConfigSource is nil" to "fail-parse"
W0623 16:08:27.208] I0623 15:41:00.759225    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.208] I0623 15:41:00.805658    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.208] I0623 15:41:00.805683    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.208] I0623 15:41:01.806973    2494 server.go:182] Initial health check passed for service "kubelet"
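Each restart above follows the same harness pattern: restart the kubelet unit, then poll its health endpoint until it answers, logging "Initial health check passed". A hedged sketch of such a loop; the endpoint (the kubelet's default healthz port, 10248) and the timings are illustrative, not taken from this run:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns 200 OK or the timeout elapses,
// approximating the harness's health/readiness checks after each restart.
func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("service not healthy after %s", timeout)
}

func main() {
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("Initial health check passed")
}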
W0623 16:08:27.209] STEP: back to "Node.Spec.ConfigSource is nil" from "fail-parse"
W0623 16:08:27.209] I0623 15:41:11.823462    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.209] I0623 15:41:11.868017    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.209] I0623 15:41:11.868040    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.209] I0623 15:41:12.869595    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.210] STEP: from "Node.Spec.ConfigSource is nil" to "fail-validate"
W0623 16:08:27.210] I0623 15:41:23.887342    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.210] I0623 15:41:23.931941    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.210] I0623 15:41:23.931964    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.210] I0623 15:41:24.935957    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.211] STEP: back to "Node.Spec.ConfigSource is nil" from "fail-validate"
W0623 16:08:27.211] I0623 15:41:35.954868    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.211] I0623 15:41:35.998968    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.211] I0623 15:41:35.998989    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.211] STEP: setting initial state "Node.Spec.ConfigSource has all nil subfields"
W0623 16:08:27.212] STEP: from "Node.Spec.ConfigSource has all nil subfields" to "Node.Spec.ConfigSource.ConfigMap is missing namespace"
W0623 16:08:27.212] STEP: back to "Node.Spec.ConfigSource has all nil subfields" from "Node.Spec.ConfigSource.ConfigMap is missing namespace"
... skipping 15 lines ...
W0623 16:08:27.216] STEP: from "Node.Spec.ConfigSource has all nil subfields" to "correct"
W0623 16:08:27.216] I0623 15:41:48.018768    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.216] I0623 15:41:48.062950    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.216] I0623 15:41:48.062975    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.216] STEP: back to "Node.Spec.ConfigSource has all nil subfields" from "correct"
W0623 16:08:27.217] I0623 15:41:49.069180    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.217] STEP: from "Node.Spec.ConfigSource has all nil subfields" to "fail-parse"
W0623 16:08:27.217] I0623 15:42:00.086445    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.217] I0623 15:42:00.138998    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.217] I0623 15:42:00.139016    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.218] I0623 15:42:01.153508    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.218] STEP: back to "Node.Spec.ConfigSource has all nil subfields" from "fail-parse"
W0623 16:08:27.218] STEP: from "Node.Spec.ConfigSource has all nil subfields" to "fail-validate"
W0623 16:08:27.218] I0623 15:42:12.168770    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.218] I0623 15:42:12.212929    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.219] I0623 15:42:12.212951    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.219] I0623 15:42:13.214126    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.219] STEP: back to "Node.Spec.ConfigSource has all nil subfields" from "fail-validate"
W0623 16:08:27.219] STEP: setting initial state "Node.Spec.ConfigSource.ConfigMap is missing namespace"
W0623 16:08:27.219] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing namespace" to "Node.Spec.ConfigSource.ConfigMap is missing name"
W0623 16:08:27.220] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing namespace" from "Node.Spec.ConfigSource.ConfigMap is missing name"
W0623 16:08:27.220] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing namespace" to "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey"
W0623 16:08:27.220] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing namespace" from "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey"
W0623 16:08:27.220] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing namespace" to "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified"
... skipping 9 lines ...
W0623 16:08:27.223] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing namespace" to "correct"
W0623 16:08:27.223] I0623 15:42:24.231215    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.223] I0623 15:42:24.283940    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.223] I0623 15:42:24.283964    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.224] I0623 15:42:25.285222    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.224] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing namespace" from "correct"
W0623 16:08:27.224] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing namespace" to "fail-parse"
W0623 16:08:27.224] I0623 15:42:37.303885    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.224] I0623 15:42:37.348355    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.224] I0623 15:42:37.348379    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.225] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing namespace" from "fail-parse"
W0623 16:08:27.225] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing namespace" to "fail-validate"
W0623 16:08:27.225] I0623 15:42:38.350676    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.225] I0623 15:42:49.366336    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.225] I0623 15:42:49.416154    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.225] I0623 15:42:49.416172    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.226] I0623 15:42:50.427911    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.226] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing namespace" from "fail-validate"
W0623 16:08:27.226] STEP: setting initial state "Node.Spec.ConfigSource.ConfigMap is missing name"
W0623 16:08:27.226] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing name" to "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey"
W0623 16:08:27.226] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing name" from "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey"
W0623 16:08:27.226] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing name" to "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified"
W0623 16:08:27.226] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing name" from "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified"
W0623 16:08:27.227] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing name" to "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified"
... skipping 7 lines ...
W0623 16:08:27.228] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing name" to "correct"
W0623 16:08:27.228] I0623 15:43:01.444877    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.228] I0623 15:43:01.490614    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.228] I0623 15:43:01.490638    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.229] I0623 15:43:02.491865    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.229] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing name" from "correct"
W0623 16:08:27.229] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing name" to "fail-parse"
W0623 16:08:27.229] I0623 15:43:12.507178    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.229] I0623 15:43:12.551053    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.229] I0623 15:43:12.551234    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.229] I0623 15:43:13.552634    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.229] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing name" from "fail-parse"
W0623 16:08:27.230] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing name" to "fail-validate"
W0623 16:08:27.230] I0623 15:43:24.570037    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.230] I0623 15:43:24.616614    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.230] I0623 15:43:24.616647    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.230] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing name" from "fail-validate"
W0623 16:08:27.230] STEP: setting initial state "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey"
W0623 16:08:27.230] I0623 15:43:25.623155    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.231] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey" to "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified"
W0623 16:08:27.231] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey" from "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified"
W0623 16:08:27.231] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey" to "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified"
W0623 16:08:27.231] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey" from "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified"
... skipping 6 lines ...
W0623 16:08:27.232] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey" to "correct"
W0623 16:08:27.232] I0623 15:43:36.639515    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.233] I0623 15:43:36.683952    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.233] I0623 15:43:36.683974    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.233] I0623 15:43:37.685101    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.233] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey" from "correct"
W0623 16:08:27.233] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey" to "fail-parse"
W0623 16:08:27.233] I0623 15:43:48.703223    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.233] I0623 15:43:48.747572    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.234] I0623 15:43:48.747595    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.234] I0623 15:43:49.748887    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.234] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey" from "fail-parse"
W0623 16:08:27.234] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey" to "fail-validate"
W0623 16:08:27.234] I0623 15:44:00.763872    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.234] I0623 15:44:00.807978    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.234] I0623 15:44:00.808002    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.234] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey" from "fail-validate"
W0623 16:08:27.235] I0623 15:44:01.820144    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.235] STEP: setting initial state "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified"
W0623 16:08:27.235] STEP: from "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified" to "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified"
W0623 16:08:27.235] STEP: back to "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified" from "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified"
W0623 16:08:27.235] STEP: from "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified" to "Node.Spec.ConfigSource.ConfigMap has invalid namespace"
W0623 16:08:27.236] STEP: back to "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified" from "Node.Spec.ConfigSource.ConfigMap has invalid namespace"
... skipping 4 lines ...
W0623 16:08:27.236] STEP: from "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified" to "correct"
W0623 16:08:27.236] I0623 15:44:12.835002    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.237] I0623 15:44:12.880215    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.237] I0623 15:44:12.880240    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.237] I0623 15:44:13.881360    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.237] STEP: back to "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified" from "correct"
W0623 16:08:27.237] STEP: from "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified" to "fail-parse"
W0623 16:08:27.237] I0623 15:44:23.896943    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.237] I0623 15:44:23.941958    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.237] I0623 15:44:23.941982    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.238] I0623 15:44:24.943143    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.238] STEP: back to "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified" from "fail-parse"
W0623 16:08:27.238] STEP: from "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified" to "fail-validate"
W0623 16:08:27.238] I0623 15:44:34.959721    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.238] I0623 15:44:35.003934    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.238] I0623 15:44:35.003957    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.238] STEP: back to "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified" from "fail-validate"
W0623 16:08:27.239] STEP: setting initial state "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified"
W0623 16:08:27.239] STEP: from "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified" to "Node.Spec.ConfigSource.ConfigMap has invalid namespace"
W0623 16:08:27.239] STEP: back to "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified" from "Node.Spec.ConfigSource.ConfigMap has invalid namespace"
W0623 16:08:27.239] STEP: from "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified" to "Node.Spec.ConfigSource.ConfigMap has invalid name"
W0623 16:08:27.240] STEP: back to "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified" from "Node.Spec.ConfigSource.ConfigMap has invalid name"
W0623 16:08:27.240] STEP: from "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified" to "Node.Spec.ConfigSource.ConfigMap has invalid kubeletConfigKey"
... skipping 2 lines ...
W0623 16:08:27.240] I0623 15:44:36.015624    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.240] I0623 15:44:46.030413    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.241] I0623 15:44:46.077599    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.241] I0623 15:44:46.077624    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.241] I0623 15:44:47.080965    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.241] STEP: back to "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified" from "correct"
W0623 16:08:27.241] STEP: from "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified" to "fail-parse"
W0623 16:08:27.241] I0623 15:44:58.097220    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.241] I0623 15:44:58.142267    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.242] I0623 15:44:58.142290    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.242] I0623 15:44:59.144286    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.242] STEP: back to "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified" from "fail-parse"
W0623 16:08:27.242] STEP: from "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified" to "fail-validate"
W0623 16:08:27.242] I0623 15:45:09.160458    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.242] I0623 15:45:09.204503    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.242] I0623 15:45:09.204526    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.242] I0623 15:45:10.205585    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.243] STEP: back to "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified" from "fail-validate"
W0623 16:08:27.243] STEP: setting initial state "Node.Spec.ConfigSource.ConfigMap has invalid namespace"
W0623 16:08:27.243] STEP: from "Node.Spec.ConfigSource.ConfigMap has invalid namespace" to "Node.Spec.ConfigSource.ConfigMap has invalid name"
W0623 16:08:27.243] STEP: back to "Node.Spec.ConfigSource.ConfigMap has invalid namespace" from "Node.Spec.ConfigSource.ConfigMap has invalid name"
W0623 16:08:27.243] STEP: from "Node.Spec.ConfigSource.ConfigMap has invalid namespace" to "Node.Spec.ConfigSource.ConfigMap has invalid kubeletConfigKey"
W0623 16:08:27.243] STEP: back to "Node.Spec.ConfigSource.ConfigMap has invalid namespace" from "Node.Spec.ConfigSource.ConfigMap has invalid kubeletConfigKey"
W0623 16:08:27.244] STEP: from "Node.Spec.ConfigSource.ConfigMap has invalid namespace" to "correct"
W0623 16:08:27.244] I0623 15:45:20.221416    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.244] I0623 15:45:20.266199    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.244] I0623 15:45:20.266221    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.244] I0623 15:45:21.267878    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.244] STEP: back to "Node.Spec.ConfigSource.ConfigMap has invalid namespace" from "correct"
W0623 16:08:27.244] STEP: from "Node.Spec.ConfigSource.ConfigMap has invalid namespace" to "fail-parse"
W0623 16:08:27.244] I0623 15:45:32.287129    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.245] I0623 15:45:32.331937    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.245] I0623 15:45:32.331960    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.245] STEP: back to "Node.Spec.ConfigSource.ConfigMap has invalid namespace" from "fail-parse"
W0623 16:08:27.245] STEP: from "Node.Spec.ConfigSource.ConfigMap has invalid namespace" to "fail-validate"
W0623 16:08:27.245] I0623 15:45:33.333050    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.245] I0623 15:45:44.351135    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.246] I0623 15:45:44.396211    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.246] I0623 15:45:44.396243    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.246] STEP: back to "Node.Spec.ConfigSource.ConfigMap has invalid namespace" from "fail-validate"
W0623 16:08:27.246] I0623 15:45:45.406736    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.246] STEP: setting initial state "Node.Spec.ConfigSource.ConfigMap has invalid name"
W0623 16:08:27.246] STEP: from "Node.Spec.ConfigSource.ConfigMap has invalid name" to "Node.Spec.ConfigSource.ConfigMap has invalid kubeletConfigKey"
W0623 16:08:27.246] STEP: back to "Node.Spec.ConfigSource.ConfigMap has invalid name" from "Node.Spec.ConfigSource.ConfigMap has invalid kubeletConfigKey"
W0623 16:08:27.247] STEP: from "Node.Spec.ConfigSource.ConfigMap has invalid name" to "correct"
W0623 16:08:27.247] I0623 15:45:56.421534    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.247] I0623 15:45:56.466004    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.247] I0623 15:45:56.466029    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.247] I0623 15:45:57.467596    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.247] STEP: back to "Node.Spec.ConfigSource.ConfigMap has invalid name" from "correct"
W0623 16:08:27.247] STEP: from "Node.Spec.ConfigSource.ConfigMap has invalid name" to "fail-parse"
W0623 16:08:27.248] I0623 15:46:07.500601    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.248] I0623 15:46:07.558755    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.248] I0623 15:46:07.558778    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.248] I0623 15:46:08.560005    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.248] STEP: back to "Node.Spec.ConfigSource.ConfigMap has invalid name" from "fail-parse"
W0623 16:08:27.248] STEP: from "Node.Spec.ConfigSource.ConfigMap has invalid name" to "fail-validate"
W0623 16:08:27.248] I0623 15:46:19.575717    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.248] I0623 15:46:19.619246    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.249] I0623 15:46:19.619268    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.249] I0623 15:46:20.620553    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.249] STEP: back to "Node.Spec.ConfigSource.ConfigMap has invalid name" from "fail-validate"
W0623 16:08:27.249] STEP: setting initial state "Node.Spec.ConfigSource.ConfigMap has invalid kubeletConfigKey"
W0623 16:08:27.249] STEP: from "Node.Spec.ConfigSource.ConfigMap has invalid kubeletConfigKey" to "correct"
W0623 16:08:27.249] I0623 15:46:30.635805    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.249] I0623 15:46:30.680234    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.249] I0623 15:46:30.680257    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.250] STEP: back to "Node.Spec.ConfigSource.ConfigMap has invalid kubeletConfigKey" from "correct"
W0623 16:08:27.250] STEP: from "Node.Spec.ConfigSource.ConfigMap has invalid kubeletConfigKey" to "fail-parse"
W0623 16:08:27.250] I0623 15:46:31.681876    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.250] I0623 15:46:42.701885    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.250] I0623 15:46:42.746133    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.250] I0623 15:46:42.746157    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.250] STEP: back to "Node.Spec.ConfigSource.ConfigMap has invalid kubeletConfigKey" from "fail-parse"
W0623 16:08:27.250] STEP: from "Node.Spec.ConfigSource.ConfigMap has invalid kubeletConfigKey" to "fail-validate"
W0623 16:08:27.251] I0623 15:46:43.747294    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.251] I0623 15:46:54.763431    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.251] I0623 15:46:54.815015    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.251] I0623 15:46:54.815031    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.251] I0623 15:46:55.816476    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.251] STEP: back to "Node.Spec.ConfigSource.ConfigMap has invalid kubeletConfigKey" from "fail-validate"
W0623 16:08:27.251] STEP: setting initial state "correct"
W0623 16:08:27.251] I0623 15:47:07.833208    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.252] I0623 15:47:07.877028    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.252] I0623 15:47:07.877051    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.252] I0623 15:47:08.878907    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.252] STEP: from "correct" to "fail-parse"
W0623 16:08:27.252] I0623 15:47:18.893342    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.252] I0623 15:47:18.937626    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.252] I0623 15:47:18.937651    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.252] I0623 15:47:19.939418    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.252] STEP: back to "correct" from "fail-parse"
W0623 16:08:27.253] I0623 15:47:29.954159    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.253] I0623 15:47:29.999429    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.253] I0623 15:47:29.999452    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.253] I0623 15:47:31.001881    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.253] STEP: from "correct" to "fail-validate"
W0623 16:08:27.253] I0623 15:47:42.020164    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.253] I0623 15:47:42.064217    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.253] I0623 15:47:42.064240    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.254] STEP: back to "correct" from "fail-validate"
W0623 16:08:27.254] I0623 15:47:43.066127    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.254] I0623 15:47:54.082031    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.254] I0623 15:47:54.126454    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.254] I0623 15:47:54.126477    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.254] STEP: setting initial state "fail-parse"
W0623 16:08:27.254] I0623 15:47:55.128772    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.254] I0623 15:48:06.145517    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.254] I0623 15:48:06.190141    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.255] I0623 15:48:06.190164    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.255] I0623 15:48:07.191206    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.255] STEP: from "fail-parse" to "fail-validate"
W0623 16:08:27.255] I0623 15:48:17.204757    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.255] I0623 15:48:17.248805    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.255] I0623 15:48:17.248827    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.255] I0623 15:48:18.250170    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.256] STEP: back to "fail-parse" from "fail-validate"
W0623 16:08:27.256] I0623 15:48:28.265352    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.256] I0623 15:48:28.308931    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.256] I0623 15:48:28.308953    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.256] I0623 15:48:29.310121    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.256] STEP: setting initial state "fail-validate"
W0623 16:08:27.256] I0623 15:48:39.324662    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.256] I0623 15:48:39.368371    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.256] I0623 15:48:39.368402    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.257] I0623 15:48:40.369480    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.257] [AfterEach] 
W0623 16:08:27.257]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:123
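Every "correct" / "fail-parse" / "fail-validate" transition above toggles the same field: Node.Spec.ConfigSource, the Dynamic Kubelet Config API (deprecated in v1.22). A sketch of the "correct" shape; the ConfigMap name is illustrative, since the real testcfg-* names are generated per run:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// "correct": namespace, name, and kubeletConfigKey all set; UID and
	// ResourceVersion left empty (populating them is the "illegally
	// specified" case exercised above).
	node := &v1.Node{}
	node.Spec.ConfigSource = &v1.NodeConfigSource{
		ConfigMap: &v1.ConfigMapNodeConfigSource{
			Namespace:        "kube-system",
			Name:             "testcfg-example", // hypothetical; real runs generate this
			KubeletConfigKey: "kubelet",
		},
	}
	// "Node.Spec.ConfigSource is nil" is simply node.Spec.ConfigSource = nil.
	fmt.Printf("%+v\n", node.Spec.ConfigSource.ConfigMap)
}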
... skipping 213 lines ...
W0623 16:08:27.289] STEP: Collecting events from namespace "device-plugin-errors-2648".
W0623 16:08:27.289] I0623 15:54:39.500746    2494 util.go:247] new configuration has taken effect
W0623 16:08:27.289] STEP: Found 7 events.
W0623 16:08:27.290] Jun 23 15:54:39.503: INFO: At 2021-06-23 15:49:36 +0000 UTC - event for device-plugin-test-4481f090-801e-464d-8c85-eb3c874aca0c: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine
W0623 16:08:27.290] Jun 23 15:54:39.503: INFO: At 2021-06-23 15:49:36 +0000 UTC - event for device-plugin-test-4481f090-801e-464d-8c85-eb3c874aca0c: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Created: Created container device-plugin-test-4481f090-801e-464d-8c85-eb3c874aca0c
W0623 16:08:27.290] Jun 23 15:54:39.503: INFO: At 2021-06-23 15:49:37 +0000 UTC - event for device-plugin-test-4481f090-801e-464d-8c85-eb3c874aca0c: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Started: Started container device-plugin-test-4481f090-801e-464d-8c85-eb3c874aca0c
W0623 16:08:27.291] Jun 23 15:54:39.503: INFO: At 2021-06-23 15:49:38 +0000 UTC - event for device-plugin-test-4481f090-801e-464d-8c85-eb3c874aca0c: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} BackOff: Back-off restarting failed container
W0623 16:08:27.291] Jun 23 15:54:39.503: INFO: At 2021-06-23 15:54:38 +0000 UTC - event for device-plugin-test-4481f090-801e-464d-8c85-eb3c874aca0c: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine
W0623 16:08:27.291] Jun 23 15:54:39.503: INFO: At 2021-06-23 15:54:38 +0000 UTC - event for device-plugin-test-4481f090-801e-464d-8c85-eb3c874aca0c: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Created: Created container device-plugin-test-4481f090-801e-464d-8c85-eb3c874aca0c
W0623 16:08:27.292] Jun 23 15:54:39.503: INFO: At 2021-06-23 15:54:39 +0000 UTC - event for device-plugin-test-4481f090-801e-464d-8c85-eb3c874aca0c: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Started: Started container device-plugin-test-4481f090-801e-464d-8c85-eb3c874aca0c
W0623 16:08:27.292] Jun 23 15:54:39.505: INFO: POD                                                      NODE                                                             PHASE    GRACE  CONDITIONS
W0623 16:08:27.293] Jun 23 15:54:39.505: INFO: device-plugin-test-4481f090-801e-464d-8c85-eb3c874aca0c  n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-06-23 15:49:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-06-23 15:49:34 +0000 UTC ContainersNotReady containers with unready status: [device-plugin-test-4481f090-801e-464d-8c85-eb3c874aca0c]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-06-23 15:49:34 +0000 UTC ContainersNotReady containers with unready status: [device-plugin-test-4481f090-801e-464d-8c85-eb3c874aca0c]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-06-23 15:49:34 +0000 UTC  }]
W0623 16:08:27.293] Jun 23 15:54:39.505: INFO: 
W0623 16:08:27.293] Jun 23 15:54:39.507: INFO: 
W0623 16:08:27.293] Logging node info for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:27.301] Jun 23 15:54:39.508: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8    f8abf7bb-64d0-4a09-8d76-bda6f82ca588 4306 0 2021-06-23 14:06:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-06-23 14:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e_node.test Update v1 2021-06-23 15:48:51 +0000 UTC FieldsV1 {"f:spec":{"f:configSource":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{}}}}} } {kubelet Update v1 2021-06-23 15:49:44 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/resource":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{},"f:example.com/resource":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{".":{},"f:active":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}},"f:assigned":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-djqxx,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},example.com/resource: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7807873024 0} {<nil>} 7624876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},example.com/resource: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7545729024 0} {<nil>} 7368876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-23 15:54:37 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-23 15:54:37 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-23 15:54:37 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-06-23 15:54:37 +0000 UTC,LastTransitionTime:2021-06-23 15:54:37 +0000 UTC,Reason:KubeletNotReady,Message:container runtime status check may not have completed yet,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.44,},NodeAddress{Type:Hostname,Address:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4e796a5e880bc4c48961313e1ae0f7f2,SystemUUID:4e796a5e-880b-c4c4-8961-313e1ae0f7f2,BootID:652e6d17-906d-40c8-b209-9babb77c0a87,KernelVersion:5.12.7-300.fc34.x86_64,OSImage:Fedora CoreOS 34.20210529.3.0,ContainerRuntimeVersion:cri-o://1.21.0,KubeletVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,KubeProxyVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:70283c77abb54f37e57cf4b838ca8978a66e6da3bd72c555696e0eaae1356b58 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1648681988,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster@sha256:83aae3701992f5ab15b9093bc73e77b43cf61e2522d7bf90d61dcb383b818b22 k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:352434302,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs@sha256:90af3b1795d2669a4a07d3a0fecbaa2ac920ef69b3c588e93423e74501793cdc k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:272582535,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd@sha256:cba7b71304b6369c0d5e1ea5e70631354b5824c7f75dbce9d63149af216efbeb k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:128894977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost@sha256:ef11a0f696f3489a1684af5525419ac332df8682a148c6843b4da63c1503ee5b k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:126732584,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:100377317,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:55e2dc12800dbf891abc700ef3004acf08ec15cc0fab95634327c09fd6d097eb k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:99655908,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:5b9eab56404c721c2f193d7967b57a92339506dfdba37e496e48304ff172e5b4 
k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:99654372,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 docker.io/google/cadvisor:latest],SizeBytes:70377136,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot@sha256:93f8fe220940db5f92e1572e72b1457fc683ea3aebd24ac9474c6bca65660834 k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:43878048,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-device-plugin@sha256:3dd0413e5a78f1c2a6484f168ba3daf23ebb0b1141897237e9559db6c5f7101f k8s.gcr.io/e2e-test-images/sample-device-plugin@sha256:e84f6ca27c51ddedf812637dd2bcf771ad69fdca1173e5690c372370d0f93c40 k8s.gcr.io/e2e-test-images/sample-device-plugin:1.3],SizeBytes:41740418,},ContainerImage{Names:[gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b],SizeBytes:33602447,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx@sha256:ebf4de42b3d660133f6f7d0feddabe31a44d07ed55f59471fd2072b0d8e8afae k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:17245687,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils@sha256:d2a412b68cba0c952d98f837aeab5ab13e075dfbd78fcd183b76afa20de5bd3d k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:12250746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:244bdbdf4b8d368b5836e9d2c7808a280a73ad72ae321d644e9f220da503218f k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1374910,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause@sha256:369201a612f7b2b585a8e6ca99f77a36bcdbd032463d815388a96800b63ef2c8 
k8s.gcr.io/pause:3.5],SizeBytes:689969,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-djqxx,UID:cca3cb49-568f-48ba-8ba0-9d96a119c432,ResourceVersion:4296,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-djqxx,UID:cca3cb49-568f-48ba-8ba0-9d96a119c432,ResourceVersion:4296,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
W0623 16:08:27.301] Jun 23 15:54:39.509: INFO: 
W0623 16:08:27.301] Logging kubelet events for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:27.302] Jun 23 15:54:39.510: INFO: 
W0623 16:08:27.302] Logging pods the kubelet thinks are on node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:27.302] Jun 23 15:54:39.513: INFO: sample-device-plugin started at 2021-06-23 15:49:24 +0000 UTC (0+1 container statuses recorded)
W0623 16:08:27.302] Jun 23 15:54:39.513: INFO: 	Container sample-device-plugin ready: true, restart count 0
... skipping 18 lines ...
W0623 16:08:27.305] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:23
W0623 16:08:27.305]   DevicePlugin
W0623 16:08:27.305]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/device_plugin_test.go:114
W0623 16:08:27.305]     Verifies the Kubelet device plugin functionality. [It]
W0623 16:08:27.305]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/device_plugin_test.go:122
W0623 16:08:27.306] 
W0623 16:08:27.306]     Unexpected error:
W0623 16:08:27.306]         <*errors.errorString | 0xc00027ac30>: {
W0623 16:08:27.306]             s: "timed out waiting for the condition",
W0623 16:08:27.306]         }
W0623 16:08:27.306]         timed out waiting for the condition
W0623 16:08:27.306]     occurred
W0623 16:08:27.306] 
... skipping 19 lines ...
W0623 16:08:27.309] [It] should set pids.max for Pod
W0623 16:08:27.309]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/pids_test.go:89
W0623 16:08:27.309] STEP: by creating a G pod
W0623 16:08:27.309] I0623 15:55:04.625426    2494 util.go:247] new configuration has taken effect
W0623 16:08:27.309] STEP: checking if the expected pids settings were applied
W0623 16:08:27.309] Jun 23 15:55:04.644: INFO: Pod to run command: expected=1024; actual=$(cat /tmp//kubepods.slice/kubepods-pod9e664557_34ad_4d36_b5a1_54c6e275542a.slice/pids.max); if [ "$expected" -ne "$actual" ]; then exit 1; fi; 
W0623 16:08:27.310] Jun 23 15:55:04.653: INFO: Waiting up to 5m0s for pod "pod02e0c014-f8c5-42ce-89aa-5a7abae418a4" in namespace "pids-limit-test-6184" to be "Succeeded or Failed"
W0623 16:08:27.310] Jun 23 15:55:04.665: INFO: Pod "pod02e0c014-f8c5-42ce-89aa-5a7abae418a4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.224861ms
W0623 16:08:27.310] Jun 23 15:55:06.670: INFO: Pod "pod02e0c014-f8c5-42ce-89aa-5a7abae418a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01679955s
W0623 16:08:27.310] Jun 23 15:55:08.673: INFO: Pod "pod02e0c014-f8c5-42ce-89aa-5a7abae418a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020105267s
W0623 16:08:27.310] STEP: Saw pod success
W0623 16:08:27.310] Jun 23 15:55:08.673: INFO: Pod "pod02e0c014-f8c5-42ce-89aa-5a7abae418a4" satisfied condition "Succeeded or Failed"
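The pids check above boils down to comparing a cgroup's pids.max file against the configured limit, exactly what the in-pod shell snippet does. A minimal Go sketch of that comparison follows; the cgroup path and the checkPidsMax helper are illustrative placeholders, not the framework's actual code:

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// checkPidsMax reads <cgroupPath>/pids.max and compares it to expected,
// mirroring the "expected -ne actual" shell test shown in the log above.
func checkPidsMax(cgroupPath string, expected int) error {
	data, err := os.ReadFile(cgroupPath + "/pids.max")
	if err != nil {
		return err
	}
	// pids.max can also read "max" (no limit); Atoi reports that as an error.
	actual, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		return fmt.Errorf("unexpected pids.max contents %q: %v", data, err)
	}
	if actual != expected {
		return fmt.Errorf("pids.max = %d, want %d", actual, expected)
	}
	return nil
}

func main() {
	// Hypothetical pod cgroup path; the real test derives it from the pod UID.
	err := checkPidsMax("/sys/fs/cgroup/pids/kubepods.slice/kubepods-podXXX.slice", 1024)
	if err != nil {
		fmt.Println("check failed:", err)
		os.Exit(1)
	}
	fmt.Println("pids.max matches the expected limit")
}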
W0623 16:08:27.310] [AfterEach] With config updated with pids limits
W0623 16:08:27.310]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/util.go:175
W0623 16:08:27.311] I0623 15:55:13.259536    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.311] I0623 15:55:13.305913    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.311] I0623 15:55:13.305937    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.311] [AfterEach] [sig-node] PodPidsLimit [Serial]
... skipping 34 lines ...
W0623 16:08:27.315] I0623 15:55:36.457234    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.315] STEP: setting initial state "correct"
W0623 16:08:27.315] I0623 15:55:37.458982    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.315] I0623 15:55:47.474195    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.315] I0623 15:55:47.519224    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.316] I0623 15:55:47.519248    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.316] STEP: from "correct" to "fail-parse"
W0623 16:08:27.316] I0623 15:55:48.521049    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.316] I0623 15:55:58.536133    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.316] I0623 15:55:58.580391    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.316] I0623 15:55:58.580415    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.316] I0623 15:55:59.582555    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.316] STEP: back to "correct" from "fail-parse"
W0623 16:08:27.317] I0623 15:56:10.597404    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.317] I0623 15:56:10.641559    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.317] I0623 15:56:10.641584    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.317] I0623 15:56:11.642616    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.317] STEP: from "correct" to "fail-validate"
W0623 16:08:27.317] I0623 15:56:22.660448    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.317] I0623 15:56:22.704933    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.317] I0623 15:56:22.704956    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.317] STEP: back to "correct" from "fail-validate"
W0623 16:08:27.318] I0623 15:56:23.718928    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.318] I0623 15:56:33.736278    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.318] I0623 15:56:33.780908    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.318] I0623 15:56:33.780931    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.318] STEP: setting initial state "fail-parse"
W0623 16:08:27.318] I0623 15:56:34.831604    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.318] I0623 15:56:45.847004    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.318] I0623 15:56:45.897057    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.319] I0623 15:56:45.898292    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.319] I0623 15:56:46.900122    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.319] STEP: from "fail-parse" to "fail-validate"
W0623 16:08:27.319] I0623 15:56:57.916061    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.319] I0623 15:56:57.960908    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.319] I0623 15:56:57.960933    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.319] I0623 15:56:58.961965    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.319] STEP: back to "fail-parse" from "fail-validate"
W0623 16:08:27.320] I0623 15:57:09.993725    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.320] I0623 15:57:10.040724    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.320] I0623 15:57:10.040748    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.320] I0623 15:57:11.041750    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.320] STEP: setting initial state "fail-validate"
W0623 16:08:27.320] I0623 15:57:22.056450    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.320] I0623 15:57:22.100188    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.320] I0623 15:57:22.100212    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.321] I0623 15:57:23.101239    2494 server.go:182] Initial health check passed for service "kubelet"
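The STEP lines above trace a fixed cycle over the three config states ("correct", "fail-parse", "fail-validate"), restarting the kubelet after every assignment. A rough sketch of that pattern, with setConfigState and restartAndWaitHealthy as hypothetical stand-ins for the framework's helpers (the real test assigns ConfigMaps through the Node API and polls the kubelet, as the surrounding log shows):

package main

import "fmt"

// setConfigState stands in for assigning a named kubelet config state.
func setConfigState(state string) { fmt.Println("assigning kubelet config:", state) }

// restartAndWaitHealthy stands in for the restart + health-check step the
// log repeats after every assignment.
func restartAndWaitHealthy() { fmt.Println("restarting kubelet, waiting for health check") }

func main() {
	cases := []struct {
		initial string
		targets []string
	}{
		{"correct", []string{"fail-parse", "fail-validate"}},
		{"fail-parse", []string{"fail-validate"}},
		{"fail-validate", nil},
	}
	for _, c := range cases {
		setConfigState(c.initial) // STEP: setting initial state
		restartAndWaitHealthy()
		for _, t := range c.targets {
			setConfigState(t) // STEP: from initial state to target
			restartAndWaitHealthy()
			setConfigState(c.initial) // STEP: back to initial from target
			restartAndWaitHealthy()
		}
	}
}

Run in order, this reproduces the same transition sequence the STEP lines record.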
W0623 16:08:27.321] [AfterEach] 
W0623 16:08:27.321]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:123
... skipping 73 lines ...
W0623 16:08:27.331] I0623 15:58:12.897434    2494 util.go:247] new configuration has taken effect
W0623 16:08:27.331] STEP: Found 0 events.
W0623 16:08:27.331] Jun 23 15:58:12.902: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
W0623 16:08:27.331] Jun 23 15:58:12.902: INFO: 
W0623 16:08:27.331] Jun 23 15:58:12.904: INFO: 
W0623 16:08:27.331] Logging node info for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:27.339] Jun 23 15:58:12.905: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8    f8abf7bb-64d0-4a09-8d76-bda6f82ca588 4712 0 2021-06-23 14:06:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-06-23 14:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e_node.test Update v1 2021-06-23 15:57:34 +0000 UTC FieldsV1 {"f:spec":{"f:configSource":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{}}}}} } {kubelet Update v1 2021-06-23 15:57:46 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/resource":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{},"f:example.com/resource":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{".":{},"f:active":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}},"f:assigned":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-qhm6m,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},example.com/resource: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7807873024 0} {<nil>} 7624876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},example.com/resource: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7545729024 0} {<nil>} 7368876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-23 15:58:10 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-23 15:58:10 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-23 15:58:10 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-06-23 15:58:10 +0000 UTC,LastTransitionTime:2021-06-23 15:57:59 +0000 UTC,Reason:KubeletNotReady,Message:container runtime status check may not have completed yet,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.44,},NodeAddress{Type:Hostname,Address:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4e796a5e880bc4c48961313e1ae0f7f2,SystemUUID:4e796a5e-880b-c4c4-8961-313e1ae0f7f2,BootID:652e6d17-906d-40c8-b209-9babb77c0a87,KernelVersion:5.12.7-300.fc34.x86_64,OSImage:Fedora CoreOS 34.20210529.3.0,ContainerRuntimeVersion:cri-o://1.21.0,KubeletVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,KubeProxyVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:70283c77abb54f37e57cf4b838ca8978a66e6da3bd72c555696e0eaae1356b58 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1648681988,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster@sha256:83aae3701992f5ab15b9093bc73e77b43cf61e2522d7bf90d61dcb383b818b22 k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:352434302,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs@sha256:90af3b1795d2669a4a07d3a0fecbaa2ac920ef69b3c588e93423e74501793cdc k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:272582535,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd@sha256:cba7b71304b6369c0d5e1ea5e70631354b5824c7f75dbce9d63149af216efbeb k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:128894977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost@sha256:ef11a0f696f3489a1684af5525419ac332df8682a148c6843b4da63c1503ee5b k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:126732584,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:100377317,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:55e2dc12800dbf891abc700ef3004acf08ec15cc0fab95634327c09fd6d097eb k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:99655908,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:5b9eab56404c721c2f193d7967b57a92339506dfdba37e496e48304ff172e5b4 
k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:99654372,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 docker.io/google/cadvisor:latest],SizeBytes:70377136,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot@sha256:93f8fe220940db5f92e1572e72b1457fc683ea3aebd24ac9474c6bca65660834 k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:43878048,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-device-plugin@sha256:3dd0413e5a78f1c2a6484f168ba3daf23ebb0b1141897237e9559db6c5f7101f k8s.gcr.io/e2e-test-images/sample-device-plugin@sha256:e84f6ca27c51ddedf812637dd2bcf771ad69fdca1173e5690c372370d0f93c40 k8s.gcr.io/e2e-test-images/sample-device-plugin:1.3],SizeBytes:41740418,},ContainerImage{Names:[gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b],SizeBytes:33602447,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx@sha256:ebf4de42b3d660133f6f7d0feddabe31a44d07ed55f59471fd2072b0d8e8afae k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:17245687,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils@sha256:d2a412b68cba0c952d98f837aeab5ab13e075dfbd78fcd183b76afa20de5bd3d k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:12250746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:244bdbdf4b8d368b5836e9d2c7808a280a73ad72ae321d644e9f220da503218f k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1374910,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause@sha256:369201a612f7b2b585a8e6ca99f77a36bcdbd032463d815388a96800b63ef2c8 
k8s.gcr.io/pause:3.5],SizeBytes:689969,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-qhm6m,UID:606667a9-6f8b-4e32-b539-b55bf666d41a,ResourceVersion:4701,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-qhm6m,UID:606667a9-6f8b-4e32-b539-b55bf666d41a,ResourceVersion:4701,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
W0623 16:08:27.339] Jun 23 15:58:12.906: INFO: 
W0623 16:08:27.339] Logging kubelet events for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:27.339] Jun 23 15:58:12.907: INFO: 
W0623 16:08:27.340] Logging pods the kubelet thinks are on node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
W0623 16:08:27.340] Jun 23 15:58:12.910: INFO: sample-device-plugin started at 2021-06-23 15:49:24 +0000 UTC (0+1 container statuses recorded)
W0623 16:08:27.340] Jun 23 15:58:12.910: INFO: 	Container sample-device-plugin ready: true, restart count 0
... skipping 17 lines ...
W0623 16:08:27.343]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:75
W0623 16:08:27.343]     
W0623 16:08:27.343]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:460
W0623 16:08:27.343]       should eventually evict all of the correct pods [BeforeEach]
W0623 16:08:27.343]       _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:475
W0623 16:08:27.343] 
W0623 16:08:27.343]       Unexpected error:
W0623 16:08:27.343]           <*exec.ExitError | 0xc000b41980>: {
W0623 16:08:27.343]               ProcessState: {
W0623 16:08:27.343]                   pid: 43415,
W0623 16:08:27.344]                   status: 256,
W0623 16:08:27.344]                   rusage: {
W0623 16:08:27.344]                       Utime: {Sec: 0, Usec: 25010},
... skipping 1172 lines ...
W0623 16:08:27.547] I0623 16:06:24.032990    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.547] I0623 16:06:24.896685    2494 util.go:247] new configuration has taken effect
W0623 16:08:27.548] I0623 16:06:25.040028    2494 server.go:182] Initial health check passed for service "kubelet"
W0623 16:08:27.548] I0623 16:06:26.040469    2494 server.go:222] Restarting server "kubelet" with restart command
W0623 16:08:27.548] I0623 16:06:26.084942    2494 server.go:171] Running health check for service "kubelet"
W0623 16:08:27.548] I0623 16:06:26.084967    2494 util.go:48] Running readiness check for service "kubelet"
W0623 16:08:27.549] W0623 16:06:27.085468    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.549] W0623 16:06:28.085900    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.549] W0623 16:06:29.086350    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
... skipping 54 lines (the same healthz connection-refused failure repeated once per second, 16:06:30 through 16:07:23) ...
W0623 16:08:27.563] W0623 16:07:24.123957    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.563] Jun 23 16:07:24.910: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3549b1f6-dad3-4a9c-ad85-778a9eb5f763] Cache-Control:[no-cache, private] Content-Length:[209] Content-Type:[application/json] Date:[Wed, 23 Jun 2021 16:07:24 GMT]] Body:0xc000908200 ContentLength:209 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0025eaf00 TLS:0xc001fdd6b0}
W0623 16:08:27.564] W0623 16:07:25.124975    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.564] W0623 16:07:26.125504    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.564] W0623 16:07:27.125947    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.564] W0623 16:07:28.127256    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.565] W0623 16:07:29.128439    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.565] Jun 23 16:07:29.922: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[994fde4b-88ec-45e6-a884-52bfa6c6da19] Cache-Control:[no-cache, private] Content-Length:[209] Content-Type:[application/json] Date:[Wed, 23 Jun 2021 16:07:29 GMT]] Body:0xc000908640 ContentLength:209 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0025eaf00 TLS:0xc001fddc30}
W0623 16:08:27.565] W0623 16:07:30.128834    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.566] W0623 16:07:31.129546    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.566] W0623 16:07:32.130115    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.566] W0623 16:07:33.130522    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.567] W0623 16:07:34.130944    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.567] Jun 23 16:07:34.920: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cf3cf970-2910-4d38-a156-139e79841d56] Cache-Control:[no-cache, private] Content-Length:[209] Content-Type:[application/json] Date:[Wed, 23 Jun 2021 16:07:34 GMT]] Body:0xc000908a80 ContentLength:209 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0025eaf00 TLS:0xc000efa210}
W0623 16:08:27.567] W0623 16:07:35.131502    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.568] W0623 16:07:36.132461    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.568] W0623 16:07:37.132947    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.568] W0623 16:07:38.133364    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.569] W0623 16:07:39.134029    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.569] Jun 23 16:07:39.921: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dd7aa330-ef44-4ca2-b043-b9d4b6505bc2] Cache-Control:[no-cache, private] Content-Length:[209] Content-Type:[application/json] Date:[Wed, 23 Jun 2021 16:07:39 GMT]] Body:0xc000908f80 ContentLength:209 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0025eaf00 TLS:0xc000efa790}
W0623 16:08:27.569] W0623 16:07:40.134414    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.570] W0623 16:07:41.135489    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.570] W0623 16:07:42.136278    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.570] W0623 16:07:43.137603    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.571] W0623 16:07:44.138323    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.571] Jun 23 16:07:44.920: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2a95fef7-2db2-4432-8235-f9adcdb23a4c] Cache-Control:[no-cache, private] Content-Length:[209] Content-Type:[application/json] Date:[Wed, 23 Jun 2021 16:07:44 GMT]] Body:0xc000909500 ContentLength:209 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0025eaf00 TLS:0xc000efae70}
W0623 16:08:27.571] W0623 16:07:45.139327    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.572] W0623 16:07:46.140533    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.572] W0623 16:07:47.140952    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.572] W0623 16:07:48.141361    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.573] W0623 16:07:49.141771    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.573] Jun 23 16:07:49.920: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c9e66292-cae8-46c8-ae20-9a9dcaa11150] Cache-Control:[no-cache, private] Content-Length:[209] Content-Type:[application/json] Date:[Wed, 23 Jun 2021 16:07:49 GMT]] Body:0xc0009099c0 ContentLength:209 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0025eaf00 TLS:0xc000efb3f0}
W0623 16:08:27.573] W0623 16:07:50.142809    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.574] W0623 16:07:51.143510    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.574] W0623 16:07:52.143955    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.574] W0623 16:07:53.144468    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.575] W0623 16:07:54.144923    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.575] Jun 23 16:07:54.922: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8f81b70c-d3fb-4452-becd-34709d3b0bf9] Cache-Control:[no-cache, private] Content-Length:[209] Content-Type:[application/json] Date:[Wed, 23 Jun 2021 16:07:54 GMT]] Body:0xc000909e40 ContentLength:209 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0025eaf00 TLS:0xc000efbad0}
W0623 16:08:27.575] W0623 16:07:55.145851    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.576] W0623 16:07:56.146615    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.576] W0623 16:07:57.147199    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.576] W0623 16:07:58.147567    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.577] W0623 16:07:59.148197    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.577] Jun 23 16:07:59.919: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f8e9530a-2ed6-40ae-b752-8779299ece1c] Cache-Control:[no-cache, private] Content-Length:[209] Content-Type:[application/json] Date:[Wed, 23 Jun 2021 16:07:59 GMT]] Body:0xc000ef82c0 ContentLength:209 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0025eaf00 TLS:0xc001c220b0}
W0623 16:08:27.577] W0623 16:08:00.149120    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.578] W0623 16:08:01.150510    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.578] W0623 16:08:02.150943    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.578] W0623 16:08:03.151385    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.579] W0623 16:08:04.151868    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.579] Jun 23 16:08:04.919: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e734d606-ff8c-4a77-9888-ced09f4b4f5a] Cache-Control:[no-cache, private] Content-Length:[209] Content-Type:[application/json] Date:[Wed, 23 Jun 2021 16:08:04 GMT]] Body:0xc000ef8600 ContentLength:209 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0025eaf00 TLS:0xc001c22630}
W0623 16:08:27.579] W0623 16:08:05.152354    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.580] W0623 16:08:06.152755    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.580] W0623 16:08:07.153477    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.580] W0623 16:08:08.153935    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.581] W0623 16:08:09.154426    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.581] Jun 23 16:08:09.921: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[41a35edf-c653-46ff-823c-88525cff6db1] Cache-Control:[no-cache, private] Content-Length:[209] Content-Type:[application/json] Date:[Wed, 23 Jun 2021 16:08:09 GMT]] Body:0xc000ef89c0 ContentLength:209 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0025eaf00 TLS:0xc001c22c60}
W0623 16:08:27.581] W0623 16:08:10.154901    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.582] W0623 16:08:11.155324    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.582] W0623 16:08:12.155850    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.582] W0623 16:08:13.156355    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.583] W0623 16:08:14.157194    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.583] Jun 23 16:08:14.921: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[325b40c4-75dc-45e0-a303-ce4d9fd347ba] Cache-Control:[no-cache, private] Content-Length:[209] Content-Type:[application/json] Date:[Wed, 23 Jun 2021 16:08:14 GMT]] Body:0xc000ef8e40 ContentLength:209 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0025eaf00 TLS:0xc001c231e0}
W0623 16:08:27.584] W0623 16:08:15.158191    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.584] W0623 16:08:16.159439    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.584] W0623 16:08:17.160436    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.585] W0623 16:08:18.160894    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.585] W0623 16:08:19.161242    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.586] Jun 23 16:08:19.920: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4435d285-668e-43fe-960c-50c747899910] Cache-Control:[no-cache, private] Content-Length:[209] Content-Type:[application/json] Date:[Wed, 23 Jun 2021 16:08:19 GMT]] Body:0xc000ef9240 ContentLength:209 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0025eaf00 TLS:0xc001c23810}
W0623 16:08:27.586] W0623 16:08:20.161688    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.586] W0623 16:08:21.162766    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.586] W0623 16:08:22.163203    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.587] W0623 16:08:23.164062    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.587] W0623 16:08:24.164506    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.588] Jun 23 16:08:24.922: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b7e856ab-4203-44f9-9092-747d19abb473] Cache-Control:[no-cache, private] Content-Length:[209] Content-Type:[application/json] Date:[Wed, 23 Jun 2021 16:08:24 GMT]] Body:0xc000ef96c0 ContentLength:209 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0025eaf00 TLS:0xc001c23d90}
W0623 16:08:27.588] W0623 16:08:25.165247    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.588] W0623 16:08:26.165640    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W0623 16:08:27.589] F0623 16:08:26.165684    2494 server.go:180] Restart loop readinessCheck failed for server "kubelet" start-command: `/usr/bin/systemd-run -p Delegate=true --unit=kubelet-20210623T140232.service --slice=runtime.slice --remain-after-exit /tmp/node-e2e-20210623T140232/kubelet --kubeconfig /tmp/node-e2e-20210623T140232/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --feature-gates DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --dynamic-config-dir /tmp/node-e2e-20210623T140232/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20210623T140232/cni/bin --cni-conf-dir /tmp/node-e2e-20210623T140232/cni/net.d --cni-cache-dir /tmp/node-e2e-20210623T140232/cni/cache --hostname-override n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 --container-runtime remote --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20210623T140232/kubelet-config --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service --non-masquerade-cidr=0.0.0.0/0`, kill-command: `/usr/bin/systemctl kill kubelet-20210623T140232.service`, restart-command: `/usr/bin/systemctl restart kubelet-20210623T140232.service`, health-check: [http://127.0.0.1:10255/healthz], output-file: "kubelet.log"
W0623 16:08:27.589] goroutine 228 [running]:
W0623 16:08:27.590] k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000010001, 0xc002044a80, 0x54d, 0x973)
W0623 16:08:27.590] 	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
W0623 16:08:27.590] k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x8c18840, 0xc000000003, 0x0, 0x0, 0xc0009a4f50, 0x0, 0x738f4d5, 0x9, 0xb4, 0x0)
W0623 16:08:27.590] 	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x1e5
W0623 16:08:27.591] k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printf(0x8c18840, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x595d0a7, 0x29, 0xc0007d4b00, 0x1, ...)
... skipping 59 lines ...
W0623 16:08:27.602] k8s.io/kubernetes/test/e2e_node.setKubeletConfiguration(0xc000ad4f20, 0xc001196000, 0x0, 0x43ad5b)
W0623 16:08:27.602] 	_output/local/go/src/k8s.io/kubernetes/test/e2e_node/util.go:207 +0x45
W0623 16:08:27.602] k8s.io/kubernetes/test/e2e_node.runTest.func1(0xc001196000, 0xc000ad4f20)
W0623 16:08:27.602] 	_output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_container_manager_test.go:175 +0x45
W0623 16:08:27.603] panic(0x4b1ebc0, 0x60a7910)
W0623 16:08:27.603] 	/usr/local/go/src/runtime/panic.go:971 +0x499
W0623 16:08:27.603] k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc00215e1e0, 0xe2, 0xc00122ade8, 0x1, 0x1)
W0623 16:08:27.603] 	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xc8
W0623 16:08:27.603] k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/asyncassertion.(*AsyncAssertion).match.func1(0x58a7db2, 0x9)
W0623 16:08:27.604] 	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/asyncassertion/async_assertion.go:134 +0x373
W0623 16:08:27.604] k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/asyncassertion.(*AsyncAssertion).match(0xc001593b80, 0x61e0fd8, 0x8c48d78, 0x12a05f201, 0x0, 0x0, 0x0, 0x989680)
W0623 16:08:27.604] 	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/asyncassertion/async_assertion.go:156 +0x411
W0623 16:08:27.604] k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/asyncassertion.(*AsyncAssertion).Should(0xc001593b80, 0x61e0fd8, 0x8c48d78, 0x0, 0x0, 0x0, 0x6188368)
... skipping 16419 lines ...
W0623 16:08:29.580] net/http.(*persistConn).writeLoop(0xc00151d680)
W0623 16:08:29.580] 	/usr/local/go/src/net/http/transport.go:2382 +0xf7
W0623 16:08:29.580] created by net/http.(*Transport).dialConn
W0623 16:08:29.580] 	/usr/local/go/src/net/http/transport.go:1744 +0xc9c
W0623 16:08:29.580] 
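Editor's note: the fatal "Restart loop readinessCheck failed" above is the harness giving up after the kubelet's read-only healthz endpoint never answered across restarts. A minimal sketch of this kind of HEAD-request readiness polling, assuming illustrative names (`pollHealthz`, the 1s cadence) rather than the real test/e2e_node/util.go code:

```go
// Sketch only: poll an HTTP healthz endpoint with HEAD requests until it
// returns 200 or a deadline passes, mirroring the retry lines in the log.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func pollHealthz(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Head(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // endpoint came up
			}
		}
		time.Sleep(time.Second) // roughly the ~1s cadence seen above
	}
	return fmt.Errorf("health check on %q did not pass within %v", url, timeout)
}

func main() {
	if err := pollHealthz("http://127.0.0.1:10255/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```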
W0623 16:08:29.580] Ginkgo ran 1 suite in 2h5m41.522161206s
W0623 16:08:29.581] Test Suite Failed
W0623 16:08:29.581] , err: exit status 1
W0623 16:08:29.581] I0623 16:08:29.477329    6143 remote.go:198] Test failed unexpectedly. Attempting to retrieve system logs (only works for nodes with journald)
W0623 16:08:29.581] I0623 16:08:29.477515    6143 ssh.go:113] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@34.145.49.26 -- sudo sh -c 'journalctl --system --all > /tmp/20210623T160829-system.log']
W0623 16:08:37.392] I0623 16:08:37.392087    6143 remote.go:203] Got the system logs from journald; copying them back...
W0623 16:08:37.393] I0623 16:08:37.392320    6143 ssh.go:113] Running the command scp, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@34.145.49.26:/tmp/20210623T160829-system.log /workspace/_artifacts/n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8-system.log]
W0623 16:08:40.208] I0623 16:08:40.207926    6143 remote.go:123] Copying test artifacts from "n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8"
W0623 16:08:40.209] I0623 16:08:40.208318    6143 ssh.go:113] Running the command scp, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine -r core@34.145.49.26:/tmp/node-e2e-20210623T140232/results/*.log /workspace/_artifacts/n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8]
W0623 16:08:41.333] I0623 16:08:41.332864    6143 ssh.go:113] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@34.145.49.26 -- sudo ls /tmp/node-e2e-20210623T140232/results/*.json]
W0623 16:08:41.946] E0623 16:08:41.946351    6143 ssh.go:116] failed to run SSH command: out: ls: cannot access '/tmp/node-e2e-20210623T140232/results/*.json': No such file or directory
W0623 16:08:41.947] , err: exit status 2
W0623 16:08:41.947] I0623 16:08:41.946650    6143 ssh.go:113] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@34.145.49.26 -- sudo ls /tmp/node-e2e-20210623T140232/results/junit*]
W0623 16:08:42.690] E0623 16:08:42.689809    6143 ssh.go:116] failed to run SSH command: out: ls: cannot access '/tmp/node-e2e-20210623T140232/results/junit*': No such file or directory
W0623 16:08:42.690] , err: exit status 2
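Editor's note: the artifact collection above shells out to `ssh`/`scp` with a fixed option set. A minimal sketch of that pattern, assuming a hypothetical `runSSH` helper; the option list is copied from the logged commands:

```go
// Sketch only: run a remote command over ssh the way the runner does,
// returning combined stdout/stderr.
package main

import (
	"fmt"
	"os/exec"
)

var sshOpts = []string{
	"-o", "UserKnownHostsFile=/dev/null",
	"-o", "IdentitiesOnly=yes",
	"-o", "CheckHostIP=no",
	"-o", "StrictHostKeyChecking=no",
	"-o", "ServerAliveInterval=30",
	"-o", "LogLevel=ERROR",
	"-i", "/workspace/.ssh/google_compute_engine",
}

func runSSH(host string, cmd ...string) ([]byte, error) {
	args := append(append([]string{}, sshOpts...), host, "--")
	args = append(args, cmd...)
	return exec.Command("ssh", args...).CombinedOutput()
}

func main() {
	// `ls` on a glob with no matches exits 2, which is exactly what
	// produced the "failed to run SSH command ... err: exit status 2"
	// lines above when no junit files were written.
	out, err := runSSH("core@34.145.49.26", "sudo", "ls", "/tmp/node-e2e-20210623T140232/results/junit*")
	fmt.Printf("out: %s, err: %v\n", out, err)
}
```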
W0623 16:08:43.422] I0623 16:08:43.422054    6143 run_remote.go:856] Deleting instance "n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8"
I0623 16:08:43.936] 
I0623 16:08:43.937] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
I0623 16:08:43.937] >                              START TEST                                >
I0623 16:08:43.937] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
... skipping 57 lines ...
I0623 16:08:43.947] I0623 14:02:45.192619    2494 image_list.go:166] Pre-pulling images with CRI [docker.io/nfvpe/sriov-device-plugin:v3.1 gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b google/cadvisor:latest k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff k8s.gcr.io/e2e-test-images/agnhost:2.32 k8s.gcr.io/e2e-test-images/busybox:1.29-1 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 k8s.gcr.io/e2e-test-images/ipc-utils:1.2 k8s.gcr.io/e2e-test-images/nginx:1.14-1 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1 k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1 k8s.gcr.io/e2e-test-images/nonewprivs:1.3 k8s.gcr.io/e2e-test-images/nonroot:1.1 k8s.gcr.io/e2e-test-images/perl:5.26 k8s.gcr.io/e2e-test-images/volume/gluster:1.2 k8s.gcr.io/e2e-test-images/volume/nfs:1.2 k8s.gcr.io/node-problem-detector:v0.6.2 k8s.gcr.io/pause:3.5 k8s.gcr.io/stress:v1]
I0623 16:08:43.947] I0623 14:05:50.204059    2494 e2e_node_suite_test.go:261] Locksmithd is masked successfully
I0623 16:08:43.948] I0623 14:05:50.204155    2494 server.go:102] Starting server "services" with command "/tmp/node-e2e-20210623T140232/e2e_node.test --run-services-mode --bearer-token=vaZqjuIJ2oF4zLZM --test.timeout=24h0m0s --ginkgo.seed=1624456964 --ginkgo.focus=\\[Serial\\] --ginkgo.skip=\\[Flaky\\]|\\[Benchmark\\]|\\[NodeSpecialFeature:.+\\]|\\[NodeAlphaFeature:.+\\] --ginkgo.slowSpecThreshold=5.00000 --system-spec-name= --system-spec-file= --extra-envs= --runtime-config= --logtostderr --v 4 --node-name=n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 --report-dir=/tmp/node-e2e-20210623T140232/results --report-prefix=fedora --image-description=fedora-coreos-34-20210529-3-0-gcp-x86-64 --feature-gates=DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags=--cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service --non-masquerade-cidr=0.0.0.0/0 --extra-log={\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}"
I0623 16:08:43.948] I0623 14:05:50.204195    2494 util.go:48] Running readiness check for service "services"
I0623 16:08:43.949] I0623 14:05:50.205216    2494 server.go:130] Output file for server "services": /tmp/node-e2e-20210623T140232/results/services.log
I0623 16:08:43.949] I0623 14:05:50.206188    2494 server.go:160] Waiting for server "services" start command to complete
I0623 16:08:43.949] W0623 14:05:55.407232    2494 util.go:106] Health check on "https://127.0.0.1:6443/healthz" failed, status=500
I0623 16:08:43.949] I0623 14:05:56.409449    2494 services.go:70] Node services started.
I0623 16:08:43.949] I0623 14:05:56.409468    2494 kubelet.go:100] Starting kubelet
I0623 16:08:43.950] I0623 14:05:56.409608    2494 feature_gate.go:243] feature gates: &{map[DynamicKubeletConfig:true LocalStorageCapacityIsolation:true]}
I0623 16:08:43.951] I0623 14:05:56.413341    2494 server.go:102] Starting server "kubelet" with command "/usr/bin/systemd-run -p Delegate=true --unit=kubelet-20210623T140232.service --slice=runtime.slice --remain-after-exit /tmp/node-e2e-20210623T140232/kubelet --kubeconfig /tmp/node-e2e-20210623T140232/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --feature-gates DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --dynamic-config-dir /tmp/node-e2e-20210623T140232/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20210623T140232/cni/bin --cni-conf-dir /tmp/node-e2e-20210623T140232/cni/net.d --cni-cache-dir /tmp/node-e2e-20210623T140232/cni/cache --hostname-override n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 --container-runtime remote --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20210623T140232/kubelet-config --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service --non-masquerade-cidr=0.0.0.0/0"
I0623 16:08:43.951] I0623 14:05:56.413552    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:43.951] I0623 14:05:56.413726    2494 server.go:130] Output file for server "kubelet": /tmp/node-e2e-20210623T140232/results/kubelet.log
I0623 16:08:43.951] I0623 14:05:56.414369    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:43.951] I0623 14:05:56.414388    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:43.952] W0623 14:05:57.414593    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:43.952] W0623 14:05:57.414655    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:43.952] W0623 14:05:58.415257    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:43.952] W0623 14:05:58.415321    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:43.952] W0623 14:05:59.415814    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:43.953] W0623 14:05:59.415871    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:43.953] W0623 14:06:00.416258    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:43.953] W0623 14:06:00.416309    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:43.953] W0623 14:06:01.417534    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:43.953] W0623 14:06:01.417585    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:43.954] W0623 14:06:02.418524    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:43.954] W0623 14:06:02.418578    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:43.954] W0623 14:06:03.419930    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:43.954] W0623 14:06:03.420691    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:43.954] I0623 14:06:04.421806    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:43.954] I0623 14:06:04.422567    2494 services.go:80] Kubelet started.
I0623 16:08:43.955] I0623 14:06:04.422591    2494 e2e_node_suite_test.go:207] Wait for the node to be ready
I0623 16:08:43.955] Jun 23 14:06:14.474: INFO: Parsing ds from https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/device-plugins/nvidia-gpu/daemonset.yaml
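Editor's note: the "Wait for the node to be ready" step above amounts to polling the Node object's Ready condition. A sketch of how this can be done with client-go, assuming a pre-built clientset and an illustrative `waitForNodeReady` name (not the suite's actual helper):

```go
// Sketch only: poll a Node until its NodeReady condition is True.
package e2eutil

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForNodeReady(cs kubernetes.Interface, nodeName string) error {
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient errors and keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == v1.NodeReady {
				return c.Status == v1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}
```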
I0623 16:08:43.955] [sig-node] Density [Serial] [Slow] create a sequence of pods 
I0623 16:08:43.955]   latency/resource should be within limit when create 10 pods with 50 background pods
... skipping 163 lines ...
I0623 16:08:43.975]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0623 16:08:43.975] STEP: Collecting events from namespace "density-test-4812".
I0623 16:08:43.975] STEP: Found 4 events.
I0623 16:08:43.975] Jun 23 14:11:14.600: INFO: At 2021-06-23 14:06:15 +0000 UTC - event for cadvisor: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Pulled: Container image "google/cadvisor:latest" already present on machine
I0623 16:08:43.976] Jun 23 14:11:14.600: INFO: At 2021-06-23 14:06:15 +0000 UTC - event for cadvisor: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Created: Created container cadvisor
I0623 16:08:43.976] Jun 23 14:11:14.600: INFO: At 2021-06-23 14:06:15 +0000 UTC - event for cadvisor: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Started: Started container cadvisor
I0623 16:08:43.976] Jun 23 14:11:14.600: INFO: At 2021-06-23 14:06:17 +0000 UTC - event for cadvisor: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} BackOff: Back-off restarting failed container
I0623 16:08:43.976] Jun 23 14:11:14.607: INFO: POD       NODE                                                             PHASE    GRACE  CONDITIONS
I0623 16:08:43.977] Jun 23 14:11:14.608: INFO: cadvisor  n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-06-23 14:06:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-06-23 14:06:14 +0000 UTC ContainersNotReady containers with unready status: [cadvisor]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-06-23 14:06:14 +0000 UTC ContainersNotReady containers with unready status: [cadvisor]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-06-23 14:06:14 +0000 UTC  }]
I0623 16:08:43.977] Jun 23 14:11:14.608: INFO: 
I0623 16:08:43.977] Jun 23 14:11:14.610: INFO: 
I0623 16:08:43.977] Logging node info for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:43.984] Jun 23 14:11:14.612: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8    f8abf7bb-64d0-4a09-8d76-bda6f82ca588 100 0 2021-06-23 14:06:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-06-23 14:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2021-06-23 14:06:14 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7807873024 0} {<nil>} 7624876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18833769646 0} {<nil>} 18833769646 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7545729024 0} {<nil>} 7368876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-23 14:06:14 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-23 14:06:14 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-23 14:06:14 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-06-23 14:06:14 +0000 UTC,LastTransitionTime:2021-06-23 14:06:14 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.44,},NodeAddress{Type:Hostname,Address:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4e796a5e880bc4c48961313e1ae0f7f2,SystemUUID:4e796a5e-880b-c4c4-8961-313e1ae0f7f2,BootID:652e6d17-906d-40c8-b209-9babb77c0a87,KernelVersion:5.12.7-300.fc34.x86_64,OSImage:Fedora CoreOS 
34.20210529.3.0,ContainerRuntimeVersion:cri-o://1.21.0,KubeletVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,KubeProxyVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:70283c77abb54f37e57cf4b838ca8978a66e6da3bd72c555696e0eaae1356b58 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1648681988,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster@sha256:83aae3701992f5ab15b9093bc73e77b43cf61e2522d7bf90d61dcb383b818b22 k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:352434302,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs@sha256:90af3b1795d2669a4a07d3a0fecbaa2ac920ef69b3c588e93423e74501793cdc k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:272582535,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd@sha256:cba7b71304b6369c0d5e1ea5e70631354b5824c7f75dbce9d63149af216efbeb k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:128894977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost@sha256:ef11a0f696f3489a1684af5525419ac332df8682a148c6843b4da63c1503ee5b k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:126732584,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:100377317,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:55e2dc12800dbf891abc700ef3004acf08ec15cc0fab95634327c09fd6d097eb k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:99655908,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:5b9eab56404c721c2f193d7967b57a92339506dfdba37e496e48304ff172e5b4 k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:99654372,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 docker.io/google/cadvisor:latest],SizeBytes:70377136,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot@sha256:93f8fe220940db5f92e1572e72b1457fc683ea3aebd24ac9474c6bca65660834 
k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:43878048,},ContainerImage{Names:[gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b],SizeBytes:33602447,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx@sha256:ebf4de42b3d660133f6f7d0feddabe31a44d07ed55f59471fd2072b0d8e8afae k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:17245687,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils@sha256:d2a412b68cba0c952d98f837aeab5ab13e075dfbd78fcd183b76afa20de5bd3d k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:12250746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:244bdbdf4b8d368b5836e9d2c7808a280a73ad72ae321d644e9f220da503218f k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1374910,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause@sha256:369201a612f7b2b585a8e6ca99f77a36bcdbd032463d815388a96800b63ef2c8 k8s.gcr.io/pause:3.5],SizeBytes:689969,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
I0623 16:08:43.984] Jun 23 14:11:14.613: INFO: 
I0623 16:08:43.984] Logging kubelet events for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:43.985] Jun 23 14:11:14.614: INFO: 
I0623 16:08:43.985] Logging pods the kubelet thinks are on node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:43.985] Jun 23 14:11:14.628: INFO: cadvisor started at 2021-06-23 14:06:14 +0000 UTC (0+1 container statuses recorded)
I0623 16:08:43.985] Jun 23 14:11:14.628: INFO: 	Container cadvisor ready: false, restart count 5
... skipping 12 lines ...
I0623 16:08:43.988] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:23
I0623 16:08:43.988]   create a sequence of pods [BeforeEach]
I0623 16:08:43.988]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:223
I0623 16:08:43.988]     latency/resource should be within limit when create 10 pods with 50 background pods
I0623 16:08:43.988]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:247
I0623 16:08:43.988] 
I0623 16:08:43.988]     Unexpected error:
I0623 16:08:43.989]         <*errors.errorString | 0xc00027ac30>: {
I0623 16:08:43.989]             s: "timed out waiting for the condition",
I0623 16:08:43.989]         }
I0623 16:08:43.989]         timed out waiting for the condition
I0623 16:08:43.989]     occurred
I0623 16:08:43.989] 
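Editor's note: "timed out waiting for the condition" is the stock `wait.ErrWaitTimeout` from k8s.io/apimachinery's wait package, surfaced whenever a poll never succeeds; here the crash-looping cadvisor background pod kept the density test's condition from ever becoming true. A minimal illustration of where that string comes from:

```go
// Sketch only: wait.Poll returns wait.ErrWaitTimeout when the condition
// function never reports done within the timeout.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	err := wait.Poll(10*time.Millisecond, 50*time.Millisecond, func() (bool, error) {
		return false, nil // condition never satisfied
	})
	fmt.Println(err) // prints: timed out waiting for the condition
}
```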
... skipping 47 lines ...
I0623 16:08:43.995] STEP: Collecting events from namespace "pidpressure-eviction-test-4274".
I0623 16:08:43.995] STEP: Found 0 events.
I0623 16:08:43.996] Jun 23 14:11:34.931: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
I0623 16:08:43.996] Jun 23 14:11:34.931: INFO: 
I0623 16:08:43.996] Jun 23 14:11:34.945: INFO: 
I0623 16:08:43.996] Logging node info for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.004] Jun 23 14:11:34.962: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8    f8abf7bb-64d0-4a09-8d76-bda6f82ca588 234 0 2021-06-23 14:06:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-06-23 14:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e_node.test Update v1 2021-06-23 14:11:14 +0000 UTC FieldsV1 {"f:spec":{"f:configSource":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{}}}}} } {kubelet Update v1 2021-06-23 14:11:23 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{".":{},"f:active":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}},"f:assigned":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-wmtvg,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7807873024 0} {<nil>} 7624876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7545729024 0} {<nil>} 7368876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-23 14:11:34 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-23 14:11:34 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-23 14:11:34 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-06-23 14:11:34 +0000 UTC,LastTransitionTime:2021-06-23 14:11:23 +0000 UTC,Reason:KubeletNotReady,Message:container runtime status check may not have completed yet,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.44,},NodeAddress{Type:Hostname,Address:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4e796a5e880bc4c48961313e1ae0f7f2,SystemUUID:4e796a5e-880b-c4c4-8961-313e1ae0f7f2,BootID:652e6d17-906d-40c8-b209-9babb77c0a87,KernelVersion:5.12.7-300.fc34.x86_64,OSImage:Fedora CoreOS 34.20210529.3.0,ContainerRuntimeVersion:cri-o://1.21.0,KubeletVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,KubeProxyVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:70283c77abb54f37e57cf4b838ca8978a66e6da3bd72c555696e0eaae1356b58 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1648681988,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster@sha256:83aae3701992f5ab15b9093bc73e77b43cf61e2522d7bf90d61dcb383b818b22 k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:352434302,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs@sha256:90af3b1795d2669a4a07d3a0fecbaa2ac920ef69b3c588e93423e74501793cdc k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:272582535,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd@sha256:cba7b71304b6369c0d5e1ea5e70631354b5824c7f75dbce9d63149af216efbeb k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:128894977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost@sha256:ef11a0f696f3489a1684af5525419ac332df8682a148c6843b4da63c1503ee5b k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:126732584,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:100377317,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:55e2dc12800dbf891abc700ef3004acf08ec15cc0fab95634327c09fd6d097eb k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:99655908,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:5b9eab56404c721c2f193d7967b57a92339506dfdba37e496e48304ff172e5b4 k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:99654372,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 docker.io/google/cadvisor:latest],SizeBytes:70377136,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot@sha256:93f8fe220940db5f92e1572e72b1457fc683ea3aebd24ac9474c6bca65660834 k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:43878048,},ContainerImage{Names:[gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b],SizeBytes:33602447,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx@sha256:ebf4de42b3d660133f6f7d0feddabe31a44d07ed55f59471fd2072b0d8e8afae k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:17245687,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils@sha256:d2a412b68cba0c952d98f837aeab5ab13e075dfbd78fcd183b76afa20de5bd3d k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:12250746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:244bdbdf4b8d368b5836e9d2c7808a280a73ad72ae321d644e9f220da503218f k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1374910,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause@sha256:369201a612f7b2b585a8e6ca99f77a36bcdbd032463d815388a96800b63ef2c8 k8s.gcr.io/pause:3.5],SizeBytes:689969,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-wmtvg,UID:7ac3d030-3581-446b-95c1-f977a647951e,ResourceVersion:222,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-wmtvg,UID:7ac3d030-3581-446b-95c1-f977a647951e,ResourceVersion:222,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
I0623 16:08:44.004] Jun 23 14:11:34.962: INFO: 
I0623 16:08:44.004] Logging kubelet events for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.004] Jun 23 14:11:34.966: INFO: 
I0623 16:08:44.004] Logging pods the kubelet thinks are on node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.004] W0623 14:11:34.978216    2494 metrics_grabber.go:89] Can't find any pods in namespace kube-system to grab metrics from
I0623 16:08:44.005] W0623 14:11:34.978379    2494 metrics_grabber.go:107] Can't find kube-scheduler pod. Grabbing metrics from kube-scheduler is disabled.
... skipping 18 lines ...
I0623 16:08:44.008]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:407
I0623 16:08:44.008]     
I0623 16:08:44.008]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:460
I0623 16:08:44.008]       should eventually evict all of the correct pods [BeforeEach]
I0623 16:08:44.008]       _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:475
I0623 16:08:44.008] 
I0623 16:08:44.008]       Unexpected error:
I0623 16:08:44.008]           <*exec.ExitError | 0xc00133b2e0>: {
I0623 16:08:44.008]               ProcessState: {
I0623 16:08:44.009]                   pid: 4443,
I0623 16:08:44.009]                   status: 256,
I0623 16:08:44.009]                   rusage: {
I0623 16:08:44.009]                       Utime: {Sec: 0, Usec: 27605},
... skipping 63 lines ...
I0623 16:08:44.018] Jun 23 14:11:45.016: INFO: Skipping waiting for service account
I0623 16:08:44.018] [BeforeEach] Downward API tests for local ephemeral storage
I0623 16:08:44.018]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi.go:37
I0623 16:08:44.018] [It] should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
I0623 16:08:44.019]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi.go:41
I0623 16:08:44.019] STEP: Creating a pod to test downward api env vars
I0623 16:08:44.019] Jun 23 14:11:45.031: INFO: Waiting up to 5m0s for pod "downward-api-7e99f7c4-3612-442f-ac46-ddda484669b2" in namespace "downward-api-2737" to be "Succeeded or Failed"
I0623 16:08:44.019] Jun 23 14:11:45.037: INFO: Pod "downward-api-7e99f7c4-3612-442f-ac46-ddda484669b2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03645ms
I0623 16:08:44.019] Jun 23 14:11:47.040: INFO: Pod "downward-api-7e99f7c4-3612-442f-ac46-ddda484669b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008837562s
I0623 16:08:44.020] Jun 23 14:11:49.044: INFO: Pod "downward-api-7e99f7c4-3612-442f-ac46-ddda484669b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012211054s
I0623 16:08:44.020] STEP: Saw pod success
I0623 16:08:44.020] Jun 23 14:11:49.044: INFO: Pod "downward-api-7e99f7c4-3612-442f-ac46-ddda484669b2" satisfied condition "Succeeded or Failed"
I0623 16:08:44.020] Jun 23 14:11:49.046: INFO: Trying to get logs from node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 pod downward-api-7e99f7c4-3612-442f-ac46-ddda484669b2 container dapi-container: <nil>
I0623 16:08:44.020] STEP: delete the pod
I0623 16:08:44.020] Jun 23 14:11:49.061: INFO: Waiting for pod downward-api-7e99f7c4-3612-442f-ac46-ddda484669b2 to disappear
I0623 16:08:44.020] Jun 23 14:11:49.063: INFO: Pod downward-api-7e99f7c4-3612-442f-ac46-ddda484669b2 no longer exists
I0623 16:08:44.021] [AfterEach] [sig-storage] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage]
I0623 16:08:44.021]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
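Editor's note: this passing test exercises the downward API's resourceFieldRef, projecting a container's ephemeral-storage requests and limits into environment variables. A sketch of the pod shape involved, with illustrative names and quantities not taken from the actual test source:

```go
// Sketch only: a pod whose env vars are populated from its own
// ephemeral-storage resources via the downward API.
package e2eutil

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func downwardAPIPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-example"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "dapi-container",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				Command: []string{"sh", "-c", "env"}, // print the injected vars and exit
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{v1.ResourceEphemeralStorage: resource.MustParse("1Gi")},
					Limits:   v1.ResourceList{v1.ResourceEphemeralStorage: resource.MustParse("2Gi")},
				},
				Env: []v1.EnvVar{
					{
						Name: "EPHEMERAL_STORAGE_LIMIT",
						ValueFrom: &v1.EnvVarSource{
							ResourceFieldRef: &v1.ResourceFieldSelector{Resource: "limits.ephemeral-storage"},
						},
					},
					{
						Name: "EPHEMERAL_STORAGE_REQUEST",
						ValueFrom: &v1.EnvVarSource{
							ResourceFieldRef: &v1.ResourceFieldSelector{Resource: "requests.ephemeral-storage"},
						},
					},
				},
			}},
		},
	}
}
```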
... skipping 221 lines ...
I0623 16:08:44.048]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0623 16:08:44.048] STEP: Collecting events from namespace "density-test-3986".
I0623 16:08:44.048] STEP: Found 4 events.
I0623 16:08:44.049] Jun 23 14:16:49.134: INFO: At 2021-06-23 14:11:49 +0000 UTC - event for cadvisor: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Pulled: Container image "google/cadvisor:latest" already present on machine
I0623 16:08:44.049] Jun 23 14:16:49.134: INFO: At 2021-06-23 14:11:49 +0000 UTC - event for cadvisor: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Created: Created container cadvisor
I0623 16:08:44.049] Jun 23 14:16:49.134: INFO: At 2021-06-23 14:11:49 +0000 UTC - event for cadvisor: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Started: Started container cadvisor
I0623 16:08:44.049] Jun 23 14:16:49.134: INFO: At 2021-06-23 14:11:51 +0000 UTC - event for cadvisor: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} BackOff: Back-off restarting failed container
I0623 16:08:44.049] Jun 23 14:16:49.136: INFO: POD       NODE                                                             PHASE    GRACE  CONDITIONS
I0623 16:08:44.050] Jun 23 14:16:49.136: INFO: cadvisor  n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-06-23 14:11:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-06-23 14:12:38 +0000 UTC ContainersNotReady containers with unready status: [cadvisor]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-06-23 14:12:38 +0000 UTC ContainersNotReady containers with unready status: [cadvisor]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-06-23 14:11:49 +0000 UTC  }]
I0623 16:08:44.050] Jun 23 14:16:49.136: INFO: 
I0623 16:08:44.050] Jun 23 14:16:49.138: INFO: 
I0623 16:08:44.050] Logging node info for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.058] Jun 23 14:16:49.139: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8    f8abf7bb-64d0-4a09-8d76-bda6f82ca588 370 0 2021-06-23 14:06:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-06-23 14:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e_node.test Update v1 2021-06-23 14:11:14 +0000 UTC FieldsV1 {"f:spec":{"f:configSource":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{}}}}} } {kubelet Update v1 2021-06-23 14:11:23 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{".":{},"f:active":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}},"f:assigned":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-wmtvg,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7807873024 0} {<nil>} 7624876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18833769646 0} {<nil>} 18833769646 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7545729024 0} {<nil>} 7368876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-23 14:16:45 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-23 14:16:45 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-23 14:16:45 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-06-23 14:16:45 +0000 UTC,LastTransitionTime:2021-06-23 14:11:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.44,},NodeAddress{Type:Hostname,Address:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4e796a5e880bc4c48961313e1ae0f7f2,SystemUUID:4e796a5e-880b-c4c4-8961-313e1ae0f7f2,BootID:652e6d17-906d-40c8-b209-9babb77c0a87,KernelVersion:5.12.7-300.fc34.x86_64,OSImage:Fedora CoreOS 34.20210529.3.0,ContainerRuntimeVersion:cri-o://1.21.0,KubeletVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,KubeProxyVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:70283c77abb54f37e57cf4b838ca8978a66e6da3bd72c555696e0eaae1356b58 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1648681988,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster@sha256:83aae3701992f5ab15b9093bc73e77b43cf61e2522d7bf90d61dcb383b818b22 k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:352434302,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs@sha256:90af3b1795d2669a4a07d3a0fecbaa2ac920ef69b3c588e93423e74501793cdc k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:272582535,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd@sha256:cba7b71304b6369c0d5e1ea5e70631354b5824c7f75dbce9d63149af216efbeb k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:128894977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost@sha256:ef11a0f696f3489a1684af5525419ac332df8682a148c6843b4da63c1503ee5b k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:126732584,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:100377317,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:55e2dc12800dbf891abc700ef3004acf08ec15cc0fab95634327c09fd6d097eb k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:99655908,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:5b9eab56404c721c2f193d7967b57a92339506dfdba37e496e48304ff172e5b4 k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:99654372,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 docker.io/google/cadvisor:latest],SizeBytes:70377136,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot@sha256:93f8fe220940db5f92e1572e72b1457fc683ea3aebd24ac9474c6bca65660834 k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:43878048,},ContainerImage{Names:[gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b],SizeBytes:33602447,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx@sha256:ebf4de42b3d660133f6f7d0feddabe31a44d07ed55f59471fd2072b0d8e8afae k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:17245687,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils@sha256:d2a412b68cba0c952d98f837aeab5ab13e075dfbd78fcd183b76afa20de5bd3d k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:12250746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:244bdbdf4b8d368b5836e9d2c7808a280a73ad72ae321d644e9f220da503218f k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1374910,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause@sha256:369201a612f7b2b585a8e6ca99f77a36bcdbd032463d815388a96800b63ef2c8 k8s.gcr.io/pause:3.5],SizeBytes:689969,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-wmtvg,UID:7ac3d030-3581-446b-95c1-f977a647951e,ResourceVersion:222,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-wmtvg,UID:7ac3d030-3581-446b-95c1-f977a647951e,ResourceVersion:222,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
I0623 16:08:44.058] Jun 23 14:16:49.139: INFO: 
I0623 16:08:44.058] Logging kubelet events for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.058] Jun 23 14:16:49.141: INFO: 
I0623 16:08:44.058] Logging pods the kubelet thinks are on node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.058] Jun 23 14:16:49.149: INFO: cadvisor started at 2021-06-23 14:11:49 +0000 UTC (0+1 container statuses recorded)
I0623 16:08:44.058] Jun 23 14:16:49.149: INFO: 	Container cadvisor ready: false, restart count 5
... skipping 12 lines ...
I0623 16:08:44.060] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:23
I0623 16:08:44.060]   create a batch of pods [BeforeEach]
I0623 16:08:44.060]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:74
I0623 16:08:44.060]     latency/resource should be within limit when create 10 pods with 0s interval
I0623 16:08:44.060]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:103
I0623 16:08:44.060] 
I0623 16:08:44.061]     Unexpected error:
I0623 16:08:44.061]         <*errors.errorString | 0xc00027ac30>: {
I0623 16:08:44.061]             s: "timed out waiting for the condition",
I0623 16:08:44.061]         }
I0623 16:08:44.061]         timed out waiting for the condition
I0623 16:08:44.061]     occurred
I0623 16:08:44.061] 
... skipping 11 lines ...
I0623 16:08:44.062] Jun 23 14:16:49.184: INFO: Skipping waiting for service account
I0623 16:08:44.063] [BeforeEach] Downward API tests for local ephemeral storage
I0623 16:08:44.063]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi.go:37
I0623 16:08:44.063] [It] should provide default limits.ephemeral-storage from node allocatable
I0623 16:08:44.063]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi.go:69
I0623 16:08:44.063] STEP: Creating a pod to test downward api env vars
I0623 16:08:44.063] Jun 23 14:16:49.192: INFO: Waiting up to 5m0s for pod "downward-api-7214bc87-421a-4a2b-b55c-6443cc109f79" in namespace "downward-api-7421" to be "Succeeded or Failed"
I0623 16:08:44.063] Jun 23 14:16:49.195: INFO: Pod "downward-api-7214bc87-421a-4a2b-b55c-6443cc109f79": Phase="Pending", Reason="", readiness=false. Elapsed: 3.0346ms
I0623 16:08:44.064] Jun 23 14:16:51.203: INFO: Pod "downward-api-7214bc87-421a-4a2b-b55c-6443cc109f79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010547117s
I0623 16:08:44.064] Jun 23 14:16:53.206: INFO: Pod "downward-api-7214bc87-421a-4a2b-b55c-6443cc109f79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01371454s
I0623 16:08:44.064] STEP: Saw pod success
I0623 16:08:44.064] Jun 23 14:16:53.206: INFO: Pod "downward-api-7214bc87-421a-4a2b-b55c-6443cc109f79" satisfied condition "Succeeded or Failed"
I0623 16:08:44.064] Jun 23 14:16:53.208: INFO: Trying to get logs from node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 pod downward-api-7214bc87-421a-4a2b-b55c-6443cc109f79 container dapi-container: <nil>
I0623 16:08:44.064] STEP: delete the pod
I0623 16:08:44.064] Jun 23 14:16:53.223: INFO: Waiting for pod downward-api-7214bc87-421a-4a2b-b55c-6443cc109f79 to disappear
I0623 16:08:44.065] Jun 23 14:16:53.224: INFO: Pod downward-api-7214bc87-421a-4a2b-b55c-6443cc109f79 no longer exists
I0623 16:08:44.065] [AfterEach] [sig-storage] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage]
I0623 16:08:44.065]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 108 lines ...
I0623 16:08:44.079] STEP: Creating a kubernetes client
I0623 16:08:44.079] STEP: Building a namespace api object, basename device-plugin-gpus-errors
I0623 16:08:44.079] Jun 23 14:41:55.021: INFO: Skipping waiting for service account
I0623 16:08:44.079] [BeforeEach] DevicePlugin
I0623 16:08:44.080]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/gpu_device_plugin_test.go:74
I0623 16:08:44.080] STEP: Ensuring that Nvidia GPUs exist on the node
I0623 16:08:44.080] Jun 23 14:41:55.032: INFO: check for Nvidia GPUs failed. Got error: exit status 1
I0623 16:08:44.080] [AfterEach] DevicePlugin
I0623 16:08:44.080]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/gpu_device_plugin_test.go:94
I0623 16:08:44.080] [AfterEach] [sig-node] NVIDIA GPU Device Plugin [Feature:GPUDevicePlugin][NodeFeature:GPUDevicePlugin][Serial] [Disruptive]
I0623 16:08:44.080]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0623 16:08:44.080] Jun 23 14:41:55.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0623 16:08:44.080] STEP: Destroying namespace "device-plugin-gpus-errors-8372" for this suite.
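The skip above ("check for Nvidia GPUs failed. Got error: exit status 1") is the expected outcome on a GPU-less VM: the probe runs a command on the node and a nonzero exit means no device was found. A sketch under the assumption that the probe greps lspci output (the real test's command may differ):

```go
package sketch

import (
	"errors"
	"os/exec"
)

// nodeHasNvidiaGPU returns false (not an error) when the probe command runs
// but matches nothing: grep exits 1, which is the "exit status 1" in the log.
func nodeHasNvidiaGPU() (bool, error) {
	err := exec.Command("bash", "-c", "lspci | grep -i nvidia").Run()
	if err == nil {
		return true, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return false, nil // command ran, no NVIDIA device matched
	}
	return false, err // command could not be started at all
}
```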
... skipping 224 lines ...
I0623 16:08:44.144] Jun 23 14:42:09.272: INFO: At 2021-06-23 14:42:01 +0000 UTC - event for guaranteed: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Killing: Stopping container container
I0623 16:08:44.144] Jun 23 14:42:09.272: INFO: At 2021-06-23 14:42:07 +0000 UTC - event for best-effort: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Killing: Stopping container container
I0623 16:08:44.145] Jun 23 14:42:09.276: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
I0623 16:08:44.145] Jun 23 14:42:09.276: INFO: 
I0623 16:08:44.145] Jun 23 14:42:09.290: INFO: 
I0623 16:08:44.145] Logging node info for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.153] Jun 23 14:42:09.300: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8    f8abf7bb-64d0-4a09-8d76-bda6f82ca588 850 0 2021-06-23 14:06:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-06-23 14:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e_node.test Update v1 2021-06-23 14:41:31 +0000 UTC FieldsV1 {"f:spec":{"f:configSource":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{}}}}} } {kubelet Update v1 2021-06-23 14:41:42 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{".":{},"f:active":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}},"f:assigned":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-9nthb,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7807873024 0} {<nil>} 7624876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7545729024 0} {<nil>} 7368876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-23 14:42:06 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-23 14:42:06 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-23 14:42:06 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-06-23 14:42:06 +0000 UTC,LastTransitionTime:2021-06-23 14:42:06 +0000 UTC,Reason:KubeletNotReady,Message:container runtime status 
check may not have completed yet,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.44,},NodeAddress{Type:Hostname,Address:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4e796a5e880bc4c48961313e1ae0f7f2,SystemUUID:4e796a5e-880b-c4c4-8961-313e1ae0f7f2,BootID:652e6d17-906d-40c8-b209-9babb77c0a87,KernelVersion:5.12.7-300.fc34.x86_64,OSImage:Fedora CoreOS 34.20210529.3.0,ContainerRuntimeVersion:cri-o://1.21.0,KubeletVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,KubeProxyVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:70283c77abb54f37e57cf4b838ca8978a66e6da3bd72c555696e0eaae1356b58 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1648681988,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster@sha256:83aae3701992f5ab15b9093bc73e77b43cf61e2522d7bf90d61dcb383b818b22 k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:352434302,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs@sha256:90af3b1795d2669a4a07d3a0fecbaa2ac920ef69b3c588e93423e74501793cdc k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:272582535,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd@sha256:cba7b71304b6369c0d5e1ea5e70631354b5824c7f75dbce9d63149af216efbeb k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:128894977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost@sha256:ef11a0f696f3489a1684af5525419ac332df8682a148c6843b4da63c1503ee5b k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:126732584,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:100377317,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:55e2dc12800dbf891abc700ef3004acf08ec15cc0fab95634327c09fd6d097eb k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:99655908,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:5b9eab56404c721c2f193d7967b57a92339506dfdba37e496e48304ff172e5b4 k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:99654372,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 
docker.io/google/cadvisor:latest],SizeBytes:70377136,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot@sha256:93f8fe220940db5f92e1572e72b1457fc683ea3aebd24ac9474c6bca65660834 k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:43878048,},ContainerImage{Names:[gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b],SizeBytes:33602447,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx@sha256:ebf4de42b3d660133f6f7d0feddabe31a44d07ed55f59471fd2072b0d8e8afae k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:17245687,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils@sha256:d2a412b68cba0c952d98f837aeab5ab13e075dfbd78fcd183b76afa20de5bd3d k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:12250746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:244bdbdf4b8d368b5836e9d2c7808a280a73ad72ae321d644e9f220da503218f k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1374910,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause@sha256:369201a612f7b2b585a8e6ca99f77a36bcdbd032463d815388a96800b63ef2c8 k8s.gcr.io/pause:3.5],SizeBytes:689969,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-9nthb,UID:2458efd4-98db-493b-95d8-a28bfb7a21a5,ResourceVersion:823,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-9nthb,UID:2458efd4-98db-493b-95d8-a28bfb7a21a5,ResourceVersion:823,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
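In the node dump above, the Ready condition is Status:False with Reason:KubeletNotReady ("container runtime status check may not have completed yet"), which is why the suite then waits for nodes to become ready. A small sketch of extracting that condition from a v1.Node (the helper name is ours, the types are the real core/v1 ones shown in the dump):

```go
package sketch

import v1 "k8s.io/api/core/v1"

// readyCondition returns the node's Ready condition, or nil if absent.
func readyCondition(node *v1.Node) *v1.NodeCondition {
	for i := range node.Status.Conditions {
		if node.Status.Conditions[i].Type == v1.NodeReady {
			return &node.Status.Conditions[i]
		}
	}
	return nil
}

// Usage: the node above counts as ready only once
// readyCondition(node) != nil && readyCondition(node).Status == v1.ConditionTrue.
```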
I0623 16:08:44.153] Jun 23 14:42:09.301: INFO: 
I0623 16:08:44.153] Logging kubelet events for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.154] Jun 23 14:42:09.309: INFO: 
I0623 16:08:44.154] Logging pods the kubelet thinks are on node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.154] Jun 23 14:42:09.352: INFO: static-critical-pod started at 2021-06-23 14:42:01 +0000 UTC (0+1 container statuses recorded)
I0623 16:08:44.154] Jun 23 14:42:09.352: INFO: 	Container container ready: true, restart count 0
... skipping 15 lines ...
I0623 16:08:44.157] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:23
I0623 16:08:44.157]   when we need to admit a critical pod
I0623 16:08:44.157]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:46
I0623 16:08:44.157]     should be able to create and delete a critical pod [It]
I0623 16:08:44.157]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/critical_pod_test.go:53
I0623 16:08:44.157] 
I0623 16:08:44.157]     Unexpected error:
I0623 16:08:44.157]         <*errors.StatusError | 0xc0004d66e0>: {
I0623 16:08:44.158]             ErrStatus: {
I0623 16:08:44.158]                 TypeMeta: {Kind: "", APIVersion: ""},
I0623 16:08:44.158]                 ListMeta: {
I0623 16:08:44.158]                     SelfLink: "",
I0623 16:08:44.158]                     ResourceVersion: "",
... skipping 124 lines ...
I0623 16:08:44.173] I0623 14:56:01.388249    2494 util.go:247] new configuration has taken effect
I0623 16:08:44.173] STEP: Found 0 events.
I0623 16:08:44.173] Jun 23 14:56:01.391: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
I0623 16:08:44.173] Jun 23 14:56:01.391: INFO: 
I0623 16:08:44.173] Jun 23 14:56:01.393: INFO: 
I0623 16:08:44.173] Logging node info for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.181] Jun 23 14:56:01.395: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8    f8abf7bb-64d0-4a09-8d76-bda6f82ca588 1160 0 2021-06-23 14:06:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-06-23 14:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e_node.test Update v1 2021-06-23 14:55:14 +0000 UTC FieldsV1 {"f:spec":{"f:configSource":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{}}}}} } {kubelet Update v1 2021-06-23 14:55:25 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{".":{},"f:active":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}},"f:assigned":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-hsmwz,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7807873024 0} {<nil>} 7624876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7545729024 0} {<nil>} 7368876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-23 14:56:01 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-23 14:56:01 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-23 14:56:01 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-06-23 14:56:01 +0000 UTC,LastTransitionTime:2021-06-23 14:56:01 +0000 UTC,Reason:KubeletNotReady,Message:container runtime status 
check may not have completed yet,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.44,},NodeAddress{Type:Hostname,Address:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4e796a5e880bc4c48961313e1ae0f7f2,SystemUUID:4e796a5e-880b-c4c4-8961-313e1ae0f7f2,BootID:652e6d17-906d-40c8-b209-9babb77c0a87,KernelVersion:5.12.7-300.fc34.x86_64,OSImage:Fedora CoreOS 34.20210529.3.0,ContainerRuntimeVersion:cri-o://1.21.0,KubeletVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,KubeProxyVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:70283c77abb54f37e57cf4b838ca8978a66e6da3bd72c555696e0eaae1356b58 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1648681988,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster@sha256:83aae3701992f5ab15b9093bc73e77b43cf61e2522d7bf90d61dcb383b818b22 k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:352434302,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs@sha256:90af3b1795d2669a4a07d3a0fecbaa2ac920ef69b3c588e93423e74501793cdc k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:272582535,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd@sha256:cba7b71304b6369c0d5e1ea5e70631354b5824c7f75dbce9d63149af216efbeb k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:128894977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost@sha256:ef11a0f696f3489a1684af5525419ac332df8682a148c6843b4da63c1503ee5b k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:126732584,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:100377317,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:55e2dc12800dbf891abc700ef3004acf08ec15cc0fab95634327c09fd6d097eb k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:99655908,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:5b9eab56404c721c2f193d7967b57a92339506dfdba37e496e48304ff172e5b4 k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:99654372,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 
docker.io/google/cadvisor:latest],SizeBytes:70377136,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot@sha256:93f8fe220940db5f92e1572e72b1457fc683ea3aebd24ac9474c6bca65660834 k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:43878048,},ContainerImage{Names:[gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b],SizeBytes:33602447,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx@sha256:ebf4de42b3d660133f6f7d0feddabe31a44d07ed55f59471fd2072b0d8e8afae k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:17245687,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils@sha256:d2a412b68cba0c952d98f837aeab5ab13e075dfbd78fcd183b76afa20de5bd3d k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:12250746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:244bdbdf4b8d368b5836e9d2c7808a280a73ad72ae321d644e9f220da503218f k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1374910,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause@sha256:369201a612f7b2b585a8e6ca99f77a36bcdbd032463d815388a96800b63ef2c8 k8s.gcr.io/pause:3.5],SizeBytes:689969,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-hsmwz,UID:fa535109-4152-4edf-a689-563de7b21bde,ResourceVersion:1146,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-hsmwz,UID:fa535109-4152-4edf-a689-563de7b21bde,ResourceVersion:1146,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
I0623 16:08:44.181] Jun 23 14:56:01.395: INFO: 
I0623 16:08:44.181] Logging kubelet events for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.181] Jun 23 14:56:01.397: INFO: 
I0623 16:08:44.181] Logging pods the kubelet thinks are on node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.181] W0623 14:56:01.403607    2494 metrics_grabber.go:89] Can't find any pods in namespace kube-system to grab metrics from
I0623 16:08:44.182] W0623 14:56:01.403624    2494 metrics_grabber.go:107] Can't find kube-scheduler pod. Grabbing metrics from kube-scheduler is disabled.
... skipping 17 lines ...
I0623 16:08:44.184]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:140
I0623 16:08:44.184]     
I0623 16:08:44.185]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:460
I0623 16:08:44.185]       should eventually evict all of the correct pods [BeforeEach]
I0623 16:08:44.185]       _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:475
I0623 16:08:44.185] 
I0623 16:08:44.185]       Unexpected error:
I0623 16:08:44.185]           <*exec.ExitError | 0xc00027dc60>: {
I0623 16:08:44.185]               ProcessState: {
I0623 16:08:44.185]                   pid: 11576,
I0623 16:08:44.185]                   status: 256,
I0623 16:08:44.185]                   rusage: {
I0623 16:08:44.185]                       Utime: {Sec: 0, Usec: 32377},
... skipping 195 lines ...
I0623 16:08:44.208] I0623 15:09:19.773828    2494 util.go:247] new configuration has taken effect
I0623 16:08:44.208] STEP: Found 0 events.
I0623 16:08:44.208] Jun 23 15:09:19.777: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
I0623 16:08:44.208] Jun 23 15:09:19.777: INFO: 
I0623 16:08:44.208] Jun 23 15:09:19.779: INFO: 
I0623 16:08:44.208] Logging node info for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.216] Jun 23 15:09:19.781: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8    f8abf7bb-64d0-4a09-8d76-bda6f82ca588 1444 0 2021-06-23 14:06:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-06-23 14:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e_node.test Update v1 2021-06-23 15:08:40 +0000 UTC FieldsV1 {"f:spec":{"f:configSource":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{}}}}} } {kubelet Update v1 2021-06-23 15:08:53 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{".":{},"f:active":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}},"f:assigned":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-ghlk6,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7807873024 0} {<nil>} 7624876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7545729024 0} {<nil>} 7368876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-23 15:09:17 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-23 15:09:17 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-23 15:09:17 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-06-23 15:09:17 +0000 UTC,LastTransitionTime:2021-06-23 15:09:17 +0000 UTC,Reason:KubeletNotReady,Message:container runtime status 
check may not have completed yet,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.44,},NodeAddress{Type:Hostname,Address:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4e796a5e880bc4c48961313e1ae0f7f2,SystemUUID:4e796a5e-880b-c4c4-8961-313e1ae0f7f2,BootID:652e6d17-906d-40c8-b209-9babb77c0a87,KernelVersion:5.12.7-300.fc34.x86_64,OSImage:Fedora CoreOS 34.20210529.3.0,ContainerRuntimeVersion:cri-o://1.21.0,KubeletVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,KubeProxyVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:70283c77abb54f37e57cf4b838ca8978a66e6da3bd72c555696e0eaae1356b58 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1648681988,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster@sha256:83aae3701992f5ab15b9093bc73e77b43cf61e2522d7bf90d61dcb383b818b22 k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:352434302,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs@sha256:90af3b1795d2669a4a07d3a0fecbaa2ac920ef69b3c588e93423e74501793cdc k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:272582535,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd@sha256:cba7b71304b6369c0d5e1ea5e70631354b5824c7f75dbce9d63149af216efbeb k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:128894977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost@sha256:ef11a0f696f3489a1684af5525419ac332df8682a148c6843b4da63c1503ee5b k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:126732584,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:100377317,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:55e2dc12800dbf891abc700ef3004acf08ec15cc0fab95634327c09fd6d097eb k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:99655908,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:5b9eab56404c721c2f193d7967b57a92339506dfdba37e496e48304ff172e5b4 k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:99654372,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 
docker.io/google/cadvisor:latest],SizeBytes:70377136,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot@sha256:93f8fe220940db5f92e1572e72b1457fc683ea3aebd24ac9474c6bca65660834 k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:43878048,},ContainerImage{Names:[gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b],SizeBytes:33602447,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx@sha256:ebf4de42b3d660133f6f7d0feddabe31a44d07ed55f59471fd2072b0d8e8afae k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:17245687,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils@sha256:d2a412b68cba0c952d98f837aeab5ab13e075dfbd78fcd183b76afa20de5bd3d k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:12250746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:244bdbdf4b8d368b5836e9d2c7808a280a73ad72ae321d644e9f220da503218f k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1374910,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause@sha256:369201a612f7b2b585a8e6ca99f77a36bcdbd032463d815388a96800b63ef2c8 k8s.gcr.io/pause:3.5],SizeBytes:689969,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-ghlk6,UID:e3fa0682-b24e-4bf9-bc07-8405021bde4f,ResourceVersion:1429,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-ghlk6,UID:e3fa0682-b24e-4bf9-bc07-8405021bde4f,ResourceVersion:1429,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
I0623 16:08:44.216] Jun 23 15:09:19.781: INFO: 
I0623 16:08:44.216] Logging kubelet events for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.216] Jun 23 15:09:19.783: INFO: 
I0623 16:08:44.216] Logging pods the kubelet thinks are on node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.217] W0623 15:09:19.789368    2494 metrics_grabber.go:89] Can't find any pods in namespace kube-system to grab metrics from
I0623 16:08:44.217] W0623 15:09:19.789399    2494 metrics_grabber.go:107] Can't find kube-scheduler pod. Grabbing metrics from kube-scheduler is disabled.
... skipping 16 lines ...
I0623 16:08:44.221]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:236
I0623 16:08:44.221]     
I0623 16:08:44.221]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:460
I0623 16:08:44.221]       should eventually evict all of the correct pods [BeforeEach]
I0623 16:08:44.221]       _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:475
I0623 16:08:44.222] 
I0623 16:08:44.222]       Unexpected error:
I0623 16:08:44.222]           <*exec.ExitError | 0xc000a3ce20>: {
I0623 16:08:44.222]               ProcessState: {
I0623 16:08:44.222]                   pid: 13746,
I0623 16:08:44.222]                   status: 256,
I0623 16:08:44.222]                   rusage: {
I0623 16:08:44.223]                       Utime: {Sec: 0, Usec: 29520},
... skipping 67 lines ...
I0623 16:08:44.234] [AfterEach] [sig-node] Container Manager Misc [Serial]
I0623 16:08:44.234]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0623 16:08:44.234] Jun 23 15:09:27.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0623 16:08:44.234] STEP: Destroying namespace "kubelet-container-manager-5977" for this suite.
I0623 16:08:44.235] •SSSSSSSSSSSSS
I0623 16:08:44.235] ------------------------------
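The eviction failures earlier in this run wrap a *exec.ExitError whose dumped "status: 256" is the raw wait(2) status word, not the exit code; the conventional code sits in the high byte, so 256 decodes to exit status 1. A sketch of decoding it (the failing command below is only an illustration):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// "false" exits nonzero; it stands in for whatever command the test ran.
	err := exec.Command("false").Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// ExitCode() masks the raw status word: 256 >> 8 == 1.
		fmt.Println(exitErr.ExitCode()) // 1
	}
}
```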
I0623 16:08:44.235] [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive]  delete and recreate ConfigMap: error while ConfigMap is absent: 
I0623 16:08:44.235]   status and events should match expectations
I0623 16:08:44.235]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:784
I0623 16:08:44.235] [BeforeEach] [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive]
I0623 16:08:44.236]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
I0623 16:08:44.236] STEP: Creating a kubernetes client
I0623 16:08:44.236] STEP: Building a namespace api object, basename dynamic-kubelet-configuration-test
... skipping 36 lines ...
I0623 16:08:44.244] 
I0623 16:08:44.244] • [SLOW TEST:71.601 seconds]
I0623 16:08:44.244] [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive]
I0623 16:08:44.244] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:23
I0623 16:08:44.244]   
I0623 16:08:44.244]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:81
I0623 16:08:44.244]     delete and recreate ConfigMap: error while ConfigMap is absent:
I0623 16:08:44.245]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:783
I0623 16:08:44.245]       status and events should match expectations
I0623 16:08:44.245]       _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:784
I0623 16:08:44.245] ------------------------------
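The DynamicKubeletConfig cases above all use one mechanism, visible in the node dumps: point the node's spec.configSource at a kubelet ConfigMap in kube-system, then wait until Status.Config.Active matches what was assigned ("new configuration has taken effect"). A sketch using the real core/v1 types; the Node update call itself is elided and the helper is hypothetical:

```go
package sketch

import v1 "k8s.io/api/core/v1"

// assignConfigSource mutates a Node to use a dynamic kubelet ConfigMap.
// The name below is one of the test ConfigMaps from this log.
func assignConfigSource(node *v1.Node) {
	node.Spec.ConfigSource = &v1.NodeConfigSource{
		ConfigMap: &v1.ConfigMapNodeConfigSource{
			Namespace:        "kube-system",
			Name:             "testcfg-9nthb",
			KubeletConfigKey: "kubelet",
		},
	}
	// After updating the Node, the test polls Node.Status.Config until
	// Active equals Assigned, which is what util.go reports above.
}
```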
I0623 16:08:44.245] [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When all containers in pod are missing 
I0623 16:08:44.246]   should complete pod sandbox clean up based on the information in sandbox checkpoint
... skipping 194 lines ...
I0623 16:08:44.274]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0623 16:08:44.274] STEP: Collecting events from namespace "resource-usage-8124".
I0623 16:08:44.274] STEP: Found 4 events.
I0623 16:08:44.274] Jun 23 15:15:39.615: INFO: At 2021-06-23 15:10:40 +0000 UTC - event for cadvisor: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Pulled: Container image "google/cadvisor:latest" already present on machine
I0623 16:08:44.275] Jun 23 15:15:39.615: INFO: At 2021-06-23 15:10:40 +0000 UTC - event for cadvisor: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Created: Created container cadvisor
I0623 16:08:44.275] Jun 23 15:15:39.615: INFO: At 2021-06-23 15:10:40 +0000 UTC - event for cadvisor: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Started: Started container cadvisor
I0623 16:08:44.275] Jun 23 15:15:39.615: INFO: At 2021-06-23 15:10:41 +0000 UTC - event for cadvisor: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} BackOff: Back-off restarting failed container
I0623 16:08:44.275] Jun 23 15:15:39.617: INFO: POD       NODE                                                             PHASE    GRACE  CONDITIONS
I0623 16:08:44.276] Jun 23 15:15:39.617: INFO: cadvisor  n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-06-23 15:10:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-06-23 15:10:39 +0000 UTC ContainersNotReady containers with unready status: [cadvisor]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-06-23 15:10:39 +0000 UTC ContainersNotReady containers with unready status: [cadvisor]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-06-23 15:10:39 +0000 UTC  }]
I0623 16:08:44.276] Jun 23 15:15:39.617: INFO: 
I0623 16:08:44.276] Jun 23 15:15:39.619: INFO: 
I0623 16:08:44.276] Logging node info for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.283] Jun 23 15:15:39.620: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8    f8abf7bb-64d0-4a09-8d76-bda6f82ca588 1546 0 2021-06-23 14:06:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-06-23 14:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e_node.test Update v1 2021-06-23 15:10:16 +0000 UTC FieldsV1 {"f:spec":{"f:configSource":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{}}}}} } {kubelet Update v1 2021-06-23 15:10:28 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{".":{},"f:active":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}},"f:assigned":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-qmcgm,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7807873024 0} {<nil>} 7624876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18833769646 0} {<nil>} 18833769646 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7545729024 0} {<nil>} 7368876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-23 15:10:39 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-23 15:10:39 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-23 15:10:39 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-06-23 15:10:39 +0000 UTC,LastTransitionTime:2021-06-23 15:10:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.44,},NodeAddress{Type:Hostname,Address:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4e796a5e880bc4c48961313e1ae0f7f2,SystemUUID:4e796a5e-880b-c4c4-8961-313e1ae0f7f2,BootID:652e6d17-906d-40c8-b209-9babb77c0a87,KernelVersion:5.12.7-300.fc34.x86_64,OSImage:Fedora CoreOS 34.20210529.3.0,ContainerRuntimeVersion:cri-o://1.21.0,KubeletVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,KubeProxyVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:70283c77abb54f37e57cf4b838ca8978a66e6da3bd72c555696e0eaae1356b58 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1648681988,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster@sha256:83aae3701992f5ab15b9093bc73e77b43cf61e2522d7bf90d61dcb383b818b22 k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:352434302,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs@sha256:90af3b1795d2669a4a07d3a0fecbaa2ac920ef69b3c588e93423e74501793cdc k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:272582535,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd@sha256:cba7b71304b6369c0d5e1ea5e70631354b5824c7f75dbce9d63149af216efbeb k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:128894977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost@sha256:ef11a0f696f3489a1684af5525419ac332df8682a148c6843b4da63c1503ee5b k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:126732584,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:100377317,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:55e2dc12800dbf891abc700ef3004acf08ec15cc0fab95634327c09fd6d097eb k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:99655908,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:5b9eab56404c721c2f193d7967b57a92339506dfdba37e496e48304ff172e5b4 k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:99654372,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 
docker.io/google/cadvisor:latest],SizeBytes:70377136,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot@sha256:93f8fe220940db5f92e1572e72b1457fc683ea3aebd24ac9474c6bca65660834 k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:43878048,},ContainerImage{Names:[gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b],SizeBytes:33602447,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx@sha256:ebf4de42b3d660133f6f7d0feddabe31a44d07ed55f59471fd2072b0d8e8afae k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:17245687,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils@sha256:d2a412b68cba0c952d98f837aeab5ab13e075dfbd78fcd183b76afa20de5bd3d k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:12250746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:244bdbdf4b8d368b5836e9d2c7808a280a73ad72ae321d644e9f220da503218f k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1374910,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause@sha256:369201a612f7b2b585a8e6ca99f77a36bcdbd032463d815388a96800b63ef2c8 k8s.gcr.io/pause:3.5],SizeBytes:689969,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-qmcgm,UID:0a294f53-6162-42e6-9112-17b7f7430e32,ResourceVersion:410,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-qmcgm,UID:0a294f53-6162-42e6-9112-17b7f7430e32,ResourceVersion:410,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
I0623 16:08:44.284] Jun 23 15:15:39.621: INFO: 
I0623 16:08:44.284] Logging kubelet events for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.284] Jun 23 15:15:39.622: INFO: 
I0623 16:08:44.284] Logging pods the kubelet thinks are on node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.284] Jun 23 15:15:39.634: INFO: cadvisor started at 2021-06-23 15:10:39 +0000 UTC (0+1 container statuses recorded)
I0623 16:08:44.284] Jun 23 15:15:39.634: INFO: 	Container cadvisor ready: false, restart count 5
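The cadvisor pod above sits at "ready: false, restart count 5" with a BackOff event, so it is still crash-looping when the 300s spec setup gives up. The kubelet's restart backoff roughly doubles per restart; the 10s base and 5m cap below are the commonly cited defaults, stated here as an assumption rather than read from the kubelet source:

```go
package sketch

import "time"

// crashBackoff approximates kubelet crash-loop delay: 10s base, doubling
// per restart, capped at 5m (constants are assumptions).
func crashBackoff(restarts int) time.Duration {
	d := 10 * time.Second
	for i := 0; i < restarts; i++ {
		d *= 2
		if d > 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

// With restart count 5 the delay already saturates the cap, so cadvisor
// spends nearly the whole 300s setup window waiting to be restarted.
```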
... skipping 10 lines ...
I0623 16:08:44.286]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:59
I0623 16:08:44.286] W0623 15:15:39.663099    2494 metrics_grabber.go:89] Can't find any pods in namespace kube-system to grab metrics from
I0623 16:08:44.286] W0623 15:15:39.663273    2494 metrics_grabber.go:107] Can't find kube-scheduler pod. Grabbing metrics from kube-scheduler is disabled.
I0623 16:08:44.286] W0623 15:15:39.663347    2494 metrics_grabber.go:111] Can't find kube-controller-manager pod. Grabbing metrics from kube-controller-manager is disabled.
I0623 16:08:44.286] W0623 15:15:39.663419    2494 metrics_grabber.go:115] Can't find snapshot-controller pod. Grabbing metrics from snapshot-controller is disabled.
I0623 16:08:44.286] W0623 15:15:39.663475    2494 metrics_grabber.go:118] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
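The MetricsGrabber warnings are expected on a standalone node suite: the grabber looks for control-plane pods in kube-system and, finding none, disables those scrapes. A sketch of the underlying check (clientset wiring assumed; this is not the grabber's exact code):

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// controlPlaneMetricsAvailable mirrors the check behind the warnings above:
// with no pods in kube-system there is nothing to scrape, so grabbing from
// kube-scheduler, kube-controller-manager, etc. is disabled.
func controlPlaneMetricsAvailable(ctx context.Context, c kubernetes.Interface) bool {
	pods, err := c.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	return err == nil && len(pods.Items) > 0
}
```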
I0623 16:08:44.287] Jun 23 15:15:39.681: INFO: runtime operation error metrics:
I0623 16:08:44.287] node "n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8" runtime operation error rate:
I0623 16:08:44.287] 
I0623 16:08:44.287] 
I0623 16:08:44.287] 
I0623 16:08:44.287] • Failure in Spec Setup (BeforeEach) [300.110 seconds]
I0623 16:08:44.287] [sig-node] Resource-usage [Serial] [Slow]
I0623 16:08:44.287] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:23
I0623 16:08:44.287]   regular resource usage tracking [BeforeEach]
I0623 16:08:44.287]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:67
I0623 16:08:44.288]     resource tracking for 10 pods per node
I0623 16:08:44.288]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:85
I0623 16:08:44.288] 
I0623 16:08:44.288]     Unexpected error:
I0623 16:08:44.288]         <*errors.errorString | 0xc00027ac30>: {
I0623 16:08:44.288]             s: "timed out waiting for the condition",
I0623 16:08:44.288]         }
I0623 16:08:44.288]         timed out waiting for the condition
I0623 16:08:44.288]     occurred
I0623 16:08:44.288] 
... skipping 47 lines ...
I0623 16:08:44.295] I0623 15:16:05.042395    2494 util.go:247] new configuration has taken effect
I0623 16:08:44.295] STEP: Found 0 events.
I0623 16:08:44.295] Jun 23 15:16:05.047: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
I0623 16:08:44.295] Jun 23 15:16:05.047: INFO: 
I0623 16:08:44.296] Jun 23 15:16:05.049: INFO: 
I0623 16:08:44.296] Logging node info for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.303] Jun 23 15:16:05.051: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8    f8abf7bb-64d0-4a09-8d76-bda6f82ca588 1703 0 2021-06-23 14:06:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-06-23 14:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e_node.test Update v1 2021-06-23 15:10:16 +0000 UTC FieldsV1 {"f:spec":{"f:configSource":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{}}}}} } {kubelet Update v1 2021-06-23 15:10:28 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{".":{},"f:active":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}},"f:assigned":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-gs4xr,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7807873024 0} {<nil>} 7624876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7545729024 0} {<nil>} 7368876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-23 15:16:00 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-23 15:16:00 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-23 15:16:00 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-06-23 15:16:00 +0000 UTC,LastTransitionTime:2021-06-23 15:16:00 +0000 UTC,Reason:KubeletNotReady,Message:container runtime status 
check may not have completed yet,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.44,},NodeAddress{Type:Hostname,Address:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4e796a5e880bc4c48961313e1ae0f7f2,SystemUUID:4e796a5e-880b-c4c4-8961-313e1ae0f7f2,BootID:652e6d17-906d-40c8-b209-9babb77c0a87,KernelVersion:5.12.7-300.fc34.x86_64,OSImage:Fedora CoreOS 34.20210529.3.0,ContainerRuntimeVersion:cri-o://1.21.0,KubeletVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,KubeProxyVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:70283c77abb54f37e57cf4b838ca8978a66e6da3bd72c555696e0eaae1356b58 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1648681988,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster@sha256:83aae3701992f5ab15b9093bc73e77b43cf61e2522d7bf90d61dcb383b818b22 k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:352434302,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs@sha256:90af3b1795d2669a4a07d3a0fecbaa2ac920ef69b3c588e93423e74501793cdc k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:272582535,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd@sha256:cba7b71304b6369c0d5e1ea5e70631354b5824c7f75dbce9d63149af216efbeb k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:128894977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost@sha256:ef11a0f696f3489a1684af5525419ac332df8682a148c6843b4da63c1503ee5b k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:126732584,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:100377317,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:55e2dc12800dbf891abc700ef3004acf08ec15cc0fab95634327c09fd6d097eb k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:99655908,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:5b9eab56404c721c2f193d7967b57a92339506dfdba37e496e48304ff172e5b4 k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:99654372,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 
docker.io/google/cadvisor:latest],SizeBytes:70377136,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot@sha256:93f8fe220940db5f92e1572e72b1457fc683ea3aebd24ac9474c6bca65660834 k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:43878048,},ContainerImage{Names:[gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b],SizeBytes:33602447,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx@sha256:ebf4de42b3d660133f6f7d0feddabe31a44d07ed55f59471fd2072b0d8e8afae k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:17245687,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils@sha256:d2a412b68cba0c952d98f837aeab5ab13e075dfbd78fcd183b76afa20de5bd3d k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:12250746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:244bdbdf4b8d368b5836e9d2c7808a280a73ad72ae321d644e9f220da503218f k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1374910,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause@sha256:369201a612f7b2b585a8e6ca99f77a36bcdbd032463d815388a96800b63ef2c8 k8s.gcr.io/pause:3.5],SizeBytes:689969,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-gs4xr,UID:87904b10-eb33-49f7-b4c5-cc40598d20c9,ResourceVersion:1690,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-gs4xr,UID:87904b10-eb33-49f7-b4c5-cc40598d20c9,ResourceVersion:1690,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
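The Node Info dump above shows Spec.ConfigSource pointing at the ConfigMap kube-system/testcfg-gs4xr, and Status.Config echoing it back as Assigned/Active — the DynamicKubeletConfig flow this job exercises. A minimal sketch of how such an assignment is written to a Node object, assuming standard client-go boilerplate (the clientset construction and error handling here are illustrative, not taken from the test):

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// assignKubeletConfig points the node's kubelet at a ConfigMap; the kubelet
// then reports it back under Status.Config.Assigned/Active, as seen in the
// Node Info dump above. UID and ResourceVersion are left empty on purpose:
// the spec forbids setting them, and the kubelet fills them in on status.
func assignKubeletConfig(ctx context.Context, cs kubernetes.Interface, nodeName string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	node.Spec.ConfigSource = &corev1.NodeConfigSource{
		ConfigMap: &corev1.ConfigMapNodeConfigSource{
			Namespace:        "kube-system",
			Name:             "testcfg-gs4xr",
			KubeletConfigKey: "kubelet",
		},
	}
	_, err = cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	node := "n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8"
	if err := assignKubeletConfig(context.Background(), cs, node); err != nil {
		panic(err)
	}
}
```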
I0623 16:08:44.303] Jun 23 15:16:05.051: INFO: 
I0623 16:08:44.303] Logging kubelet events for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.304] Jun 23 15:16:05.053: INFO: 
I0623 16:08:44.304] Logging pods the kubelet thinks are on node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.304] Jun 23 15:16:05.056: INFO: cadvisor started at 2021-06-23 15:10:39 +0000 UTC (0+1 container statuses recorded)
I0623 16:08:44.304] Jun 23 15:16:05.056: INFO: 	Container cadvisor ready: false, restart count 5
... skipping 17 lines ...
I0623 16:08:44.307]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/quota_lsci_test.go:57
I0623 16:08:44.307]     
I0623 16:08:44.307]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:460
I0623 16:08:44.307]       should eventually evict all of the correct pods [BeforeEach]
I0623 16:08:44.307]       _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:475
I0623 16:08:44.307] 
I0623 16:08:44.307]       Unexpected error:
I0623 16:08:44.307]           <*exec.ExitError | 0xc000a077c0>: {
I0623 16:08:44.307]               ProcessState: {
I0623 16:08:44.307]                   pid: 15683,
I0623 16:08:44.308]                   status: 256,
I0623 16:08:44.308]                   rusage: {
I0623 16:08:44.308]                       Utime: {Sec: 0, Usec: 26018},
... skipping 35 lines ...
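The *exec.ExitError dump above reports a raw wait status of 256, which is the platform value rather than the exit code itself; the real exit code is 256 >> 8 == 1. A small sketch of how that shape arises and how the exit code is recovered (the command here is illustrative only):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"syscall"
)

func main() {
	// Any command that exits non-zero reproduces the shape of the failure.
	err := exec.Command("sh", "-c", "exit 1").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// On Unix, Sys() yields the raw syscall.WaitStatus (e.g. 256);
		// ExitCode() masks out the actual exit code (here, 1).
		ws := ee.Sys().(syscall.WaitStatus)
		fmt.Printf("raw status=%d exit code=%d\n", ws, ee.ExitCode())
	}
}
```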
I0623 16:08:44.311]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0623 16:08:44.311] Jun 23 15:16:11.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0623 16:08:44.311] STEP: Destroying namespace "kubelet-container-manager-5122" for this suite.
I0623 16:08:44.311] •
I0623 16:08:44.312] ------------------------------
I0623 16:08:44.312] [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] Without SRIOV devices in the system 
I0623 16:08:44.312]   should return the expected error with the feature gate disabled
I0623 16:08:44.312]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/podresources_test.go:634
I0623 16:08:44.312] [BeforeEach] [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources]
I0623 16:08:44.312]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
I0623 16:08:44.312] STEP: Creating a kubernetes client
I0623 16:08:44.312] STEP: Building a namespace api object, basename podresources-test
I0623 16:08:44.313] Jun 23 15:16:11.153: INFO: Skipping waiting for service account
I0623 16:08:44.313] [It] should return the expected error with the feature gate disabled
I0623 16:08:44.313]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/podresources_test.go:634
I0623 16:08:44.313] STEP: checking GetAllocatableResources fails if the feature gate is not enabled
I0623 16:08:44.313] [AfterEach] [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources]
I0623 16:08:44.313]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0623 16:08:44.313] Jun 23 15:16:11.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0623 16:08:44.313] STEP: Destroying namespace "podresources-test-8087" for this suite.
I0623 16:08:44.314] •SSSSSSS
I0623 16:08:44.314] ------------------------------
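The passing test above verifies that GetAllocatableResources returns an error while its feature gate (KubeletPodResourcesGetAllocatable in this release) is off. A hedged sketch of the call it exercises against the kubelet's pod-resources gRPC socket; the socket path and dial options are assumptions, not taken from the log:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"

	"google.golang.org/grpc"
	podresourcesv1 "k8s.io/kubelet/pkg/apis/podresources/v1"
)

const socket = "/var/lib/kubelet/pod-resources/kubelet.sock"

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Dial the kubelet's unix socket; the custom dialer receives the
	// socket path as the target address.
	conn, err := grpc.DialContext(ctx, socket,
		grpc.WithInsecure(),
		grpc.WithContextDialer(func(ctx context.Context, addr string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", addr)
		}))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := podresourcesv1.NewPodResourcesListerClient(conn)
	_, err = client.GetAllocatableResources(ctx, &podresourcesv1.AllocatableResourcesRequest{})
	// With the feature gate disabled, a non-nil error is the expected result.
	fmt.Println("expected an error while the feature gate is off:", err)
}
```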
... skipping 642 lines ...
I0623 16:08:44.391] I0623 15:37:04.069659    2494 util.go:247] new configuration has taken effect
I0623 16:08:44.391] STEP: Found 0 events.
I0623 16:08:44.391] Jun 23 15:37:04.073: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
I0623 16:08:44.391] Jun 23 15:37:04.073: INFO: 
I0623 16:08:44.391] Jun 23 15:37:04.075: INFO: 
I0623 16:08:44.391] Logging node info for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.398] Jun 23 15:37:04.076: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8    f8abf7bb-64d0-4a09-8d76-bda6f82ca588 3269 0 2021-06-23 14:06:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-06-23 14:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e_node.test Update v1 2021-06-23 15:36:27 +0000 UTC FieldsV1 {"f:spec":{"f:configSource":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{}}}}} } {kubelet Update v1 2021-06-23 15:36:38 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{".":{},"f:active":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}},"f:assigned":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-7gz2d,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7807873024 0} {<nil>} 7624876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7545729024 0} {<nil>} 7368876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-23 15:37:02 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-23 15:37:02 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-23 15:37:02 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-06-23 15:37:02 +0000 UTC,LastTransitionTime:2021-06-23 15:37:02 +0000 UTC,Reason:KubeletNotReady,Message:container runtime status 
check may not have completed yet,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.44,},NodeAddress{Type:Hostname,Address:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4e796a5e880bc4c48961313e1ae0f7f2,SystemUUID:4e796a5e-880b-c4c4-8961-313e1ae0f7f2,BootID:652e6d17-906d-40c8-b209-9babb77c0a87,KernelVersion:5.12.7-300.fc34.x86_64,OSImage:Fedora CoreOS 34.20210529.3.0,ContainerRuntimeVersion:cri-o://1.21.0,KubeletVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,KubeProxyVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:70283c77abb54f37e57cf4b838ca8978a66e6da3bd72c555696e0eaae1356b58 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1648681988,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster@sha256:83aae3701992f5ab15b9093bc73e77b43cf61e2522d7bf90d61dcb383b818b22 k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:352434302,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs@sha256:90af3b1795d2669a4a07d3a0fecbaa2ac920ef69b3c588e93423e74501793cdc k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:272582535,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd@sha256:cba7b71304b6369c0d5e1ea5e70631354b5824c7f75dbce9d63149af216efbeb k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:128894977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost@sha256:ef11a0f696f3489a1684af5525419ac332df8682a148c6843b4da63c1503ee5b k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:126732584,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:100377317,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:55e2dc12800dbf891abc700ef3004acf08ec15cc0fab95634327c09fd6d097eb k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:99655908,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:5b9eab56404c721c2f193d7967b57a92339506dfdba37e496e48304ff172e5b4 k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:99654372,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 
docker.io/google/cadvisor:latest],SizeBytes:70377136,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot@sha256:93f8fe220940db5f92e1572e72b1457fc683ea3aebd24ac9474c6bca65660834 k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:43878048,},ContainerImage{Names:[gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b],SizeBytes:33602447,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx@sha256:ebf4de42b3d660133f6f7d0feddabe31a44d07ed55f59471fd2072b0d8e8afae k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:17245687,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils@sha256:d2a412b68cba0c952d98f837aeab5ab13e075dfbd78fcd183b76afa20de5bd3d k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:12250746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:244bdbdf4b8d368b5836e9d2c7808a280a73ad72ae321d644e9f220da503218f k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1374910,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause@sha256:369201a612f7b2b585a8e6ca99f77a36bcdbd032463d815388a96800b63ef2c8 k8s.gcr.io/pause:3.5],SizeBytes:689969,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-7gz2d,UID:2c95a9ec-aed1-43f4-be58-d0c0a433ce15,ResourceVersion:3256,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-7gz2d,UID:2c95a9ec-aed1-43f4-be58-d0c0a433ce15,ResourceVersion:3256,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
I0623 16:08:44.399] Jun 23 15:37:04.077: INFO: 
I0623 16:08:44.399] Logging kubelet events for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.399] Jun 23 15:37:04.078: INFO: 
I0623 16:08:44.399] Logging pods the kubelet thinks are on node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.399] Jun 23 15:37:04.081: INFO: cadvisor started at 2021-06-23 15:10:39 +0000 UTC (0+1 container statuses recorded)
I0623 16:08:44.399] Jun 23 15:37:04.081: INFO: 	Container cadvisor ready: false, restart count 5
... skipping 19 lines ...
I0623 16:08:44.402]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:173
I0623 16:08:44.402]     
I0623 16:08:44.402]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:460
I0623 16:08:44.403]       should eventually evict all of the correct pods [BeforeEach]
I0623 16:08:44.403]       _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:475
I0623 16:08:44.403] 
I0623 16:08:44.403]       Unexpected error:
I0623 16:08:44.403]           <*exec.ExitError | 0xc000aa44e0>: {
I0623 16:08:44.403]               ProcessState: {
I0623 16:08:44.403]                   pid: 31246,
I0623 16:08:44.403]                   status: 256,
I0623 16:08:44.403]                   rusage: {
I0623 16:08:44.403]                       Utime: {Sec: 0, Usec: 30482},
... skipping 164 lines ...
I0623 16:08:44.423] STEP: Collecting events from namespace "priority-disk-eviction-ordering-test-1480".
I0623 16:08:44.423] STEP: Found 0 events.
I0623 16:08:44.423] Jun 23 15:37:39.517: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
I0623 16:08:44.423] Jun 23 15:37:39.517: INFO: 
I0623 16:08:44.423] Jun 23 15:37:39.519: INFO: 
I0623 16:08:44.423] Logging node info for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.430] Jun 23 15:37:39.521: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8    f8abf7bb-64d0-4a09-8d76-bda6f82ca588 3321 0 2021-06-23 14:06:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-06-23 14:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e_node.test Update v1 2021-06-23 15:36:27 +0000 UTC FieldsV1 {"f:spec":{"f:configSource":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{}}}}} } {kubelet Update v1 2021-06-23 15:36:38 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{".":{},"f:active":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}},"f:assigned":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-bw5z4,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7807873024 0} {<nil>} 7624876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7545729024 0} {<nil>} 7368876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-23 15:37:38 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-23 15:37:38 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-23 15:37:38 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-06-23 15:37:38 +0000 UTC,LastTransitionTime:2021-06-23 15:37:38 +0000 UTC,Reason:KubeletNotReady,Message:container runtime status 
check may not have completed yet,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.44,},NodeAddress{Type:Hostname,Address:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4e796a5e880bc4c48961313e1ae0f7f2,SystemUUID:4e796a5e-880b-c4c4-8961-313e1ae0f7f2,BootID:652e6d17-906d-40c8-b209-9babb77c0a87,KernelVersion:5.12.7-300.fc34.x86_64,OSImage:Fedora CoreOS 34.20210529.3.0,ContainerRuntimeVersion:cri-o://1.21.0,KubeletVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,KubeProxyVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:70283c77abb54f37e57cf4b838ca8978a66e6da3bd72c555696e0eaae1356b58 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1648681988,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster@sha256:83aae3701992f5ab15b9093bc73e77b43cf61e2522d7bf90d61dcb383b818b22 k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:352434302,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs@sha256:90af3b1795d2669a4a07d3a0fecbaa2ac920ef69b3c588e93423e74501793cdc k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:272582535,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd@sha256:cba7b71304b6369c0d5e1ea5e70631354b5824c7f75dbce9d63149af216efbeb k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:128894977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost@sha256:ef11a0f696f3489a1684af5525419ac332df8682a148c6843b4da63c1503ee5b k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:126732584,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:100377317,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:55e2dc12800dbf891abc700ef3004acf08ec15cc0fab95634327c09fd6d097eb k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:99655908,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:5b9eab56404c721c2f193d7967b57a92339506dfdba37e496e48304ff172e5b4 k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:99654372,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 
docker.io/google/cadvisor:latest],SizeBytes:70377136,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot@sha256:93f8fe220940db5f92e1572e72b1457fc683ea3aebd24ac9474c6bca65660834 k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:43878048,},ContainerImage{Names:[gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b],SizeBytes:33602447,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx@sha256:ebf4de42b3d660133f6f7d0feddabe31a44d07ed55f59471fd2072b0d8e8afae k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:17245687,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils@sha256:d2a412b68cba0c952d98f837aeab5ab13e075dfbd78fcd183b76afa20de5bd3d k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:12250746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:244bdbdf4b8d368b5836e9d2c7808a280a73ad72ae321d644e9f220da503218f k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1374910,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause@sha256:369201a612f7b2b585a8e6ca99f77a36bcdbd032463d815388a96800b63ef2c8 k8s.gcr.io/pause:3.5],SizeBytes:689969,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-bw5z4,UID:e6b60c51-8c82-4256-9108-5d48580ea138,ResourceVersion:3308,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-bw5z4,UID:e6b60c51-8c82-4256-9108-5d48580ea138,ResourceVersion:3308,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
I0623 16:08:44.431] Jun 23 15:37:39.521: INFO: 
I0623 16:08:44.431] Logging kubelet events for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.431] Jun 23 15:37:39.522: INFO: 
I0623 16:08:44.431] Logging pods the kubelet thinks are on node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.431] Jun 23 15:37:39.525: INFO: cadvisor started at 2021-06-23 15:10:39 +0000 UTC (0+1 container statuses recorded)
I0623 16:08:44.431] Jun 23 15:37:39.525: INFO: 	Container cadvisor ready: false, restart count 5
... skipping 19 lines ...
I0623 16:08:44.434]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:351
I0623 16:08:44.434]     
I0623 16:08:44.435]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:460
I0623 16:08:44.435]       should eventually evict all of the correct pods [BeforeEach]
I0623 16:08:44.435]       _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:475
I0623 16:08:44.435] 
I0623 16:08:44.435]       Unexpected error:
I0623 16:08:44.435]           <*exec.ExitError | 0xc000a3d8c0>: {
I0623 16:08:44.435]               ProcessState: {
I0623 16:08:44.435]                   pid: 31562,
I0623 16:08:44.435]                   status: 256,
I0623 16:08:44.435]                   rusage: {
I0623 16:08:44.435]                       Utime: {Sec: 0, Usec: 26703},
... skipping 176 lines ...
I0623 16:08:44.456] STEP: Building a namespace api object, basename topology-manager-test
I0623 16:08:44.456] Jun 23 15:38:27.647: INFO: Skipping waiting for service account
I0623 16:08:44.456] [It] run Topology Manager policy test suite
I0623 16:08:44.456]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/topology_manager_test.go:888
I0623 16:08:44.456] STEP: by configuring Topology Manager policy to single-numa-node
I0623 16:08:44.456] Jun 23 15:38:27.653: INFO: Configuring Topology Manager policy to single-numa-node
I0623 16:08:44.456] Jun 23 15:38:27.656: INFO: failed to find any VF device from [{0000:00:00.0 -1 false false} {0000:00:01.0 -1 false false} {0000:00:01.3 -1 false false} {0000:00:03.0 -1 false false} {0000:00:04.0 -1 false false} {0000:00:05.0 -1 false false}]
I0623 16:08:44.458] Jun 23 15:38:27.656: INFO: New kubelet config is {{ } %!s(bool=true) /tmp/node-e2e-20210623T140232/static-pods515999671 {1m0s} {10s} {20s}  map[] 0.0.0.0 %!s(int32=10250) %!s(int32=10255) /usr/libexec/kubernetes/kubelet-plugins/volume/exec/  /var/lib/kubelet/pki/kubelet.crt /var/lib/kubelet/pki/kubelet.key []  %!s(bool=false) %!s(bool=false) {{} {%!s(bool=false) {2m0s}} {%!s(bool=true)}} {AlwaysAllow {{5m0s} {30s}}} %!s(int32=5) %!s(int32=10) %!s(int32=5) %!s(int32=10) %!s(bool=true) %!s(bool=false) %!s(int32=10248) 127.0.0.1 %!s(int32=-999)  [] {4h0m0s} {10s} {5m0s} %!s(int32=40) {2m0s} %!s(int32=85) %!s(int32=80) {10s} /system.slice/kubelet.service  / %!s(bool=true) systemd static {1s} None single-numa-node container map[] {2m0s} promiscuous-bridge %!s(int32=110) 10.100.0.0/24 %!s(int64=-1) /etc/resolv.conf %!s(bool=false) %!s(bool=true) {100ms} %!s(int64=1000000) %!s(int32=50) application/vnd.kubernetes.protobuf %!s(int32=5) %!s(int32=10) %!s(bool=false) map[memory.available:250Mi nodefs.available:10% nodefs.inodesFree:5%] map[] map[] {30s} %!s(int32=0) map[nodefs.available:5% nodefs.inodesFree:5%] %!s(int32=0) %!s(bool=true) %!s(bool=false) %!s(bool=true) %!s(int32=14) %!s(int32=15) map[CPUManager:%!s(bool=true) DynamicKubeletConfig:%!s(bool=true) LocalStorageCapacityIsolation:%!s(bool=true) TopologyManager:%!s(bool=true)] %!s(bool=true) 10Mi %!s(int32=5) Watch [] %!s(bool=false) map[] map[cpu:200m]   [pods]   {text %!s(bool=false)} %!s(bool=true) {0s} {0s} [] %!s(bool=true) %!s(bool=true)}
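The kubelet config dumped above combines the static CPU Manager policy with Topology Manager set to single-numa-node, so a Guaranteed ("Gu") pod with an integral CPU request is admitted only if that request can be aligned to one NUMA node; otherwise the kubelet rejects it with TopologyAffinityError, which is what happens to gu-pod below. A minimal sketch of such a pod spec, with names, image, and the CPU count being illustrative rather than copied from the test:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func guPod() *corev1.Pod {
	res := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("1"),
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "gu-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "gu-container",
				Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				// Requests == limits with an integer CPU count makes the pod
				// Guaranteed QoS and eligible for exclusive CPU assignment,
				// which is what Topology Manager must align to a NUMA node.
				Resources: corev1.ResourceRequirements{Requests: res, Limits: res},
			}},
		},
	}
}

func main() { _ = guPod() }
```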
I0623 16:08:44.458] I0623 15:38:32.130456    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.458] I0623 15:38:32.181912    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.458] I0623 15:38:32.181936    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.458] I0623 15:38:32.774657    2494 util.go:247] new configuration has taken effect
I0623 16:08:44.458] STEP: running a non-Gu pod
... skipping 6 lines ...
I0623 16:08:44.459] Jun 23 15:38:36.856: INFO: Waiting for pod non-gu-pod to disappear
I0623 16:08:44.459] Jun 23 15:38:36.860: INFO: Pod non-gu-pod no longer exists
I0623 16:08:44.459] I0623 15:38:36.860930    2494 remote_runtime.go:54] "Connecting to runtime service" endpoint="unix:///var/run/crio/crio.sock"
I0623 16:08:44.460] I0623 15:38:36.861017    2494 remote_image.go:41] "Connecting to image service" endpoint="unix:///var/run/crio/crio.sock"
I0623 16:08:44.460] STEP: running a Gu pod
I0623 16:08:44.460] Jun 23 15:39:32.937: INFO: The status of Pod gu-pod is Pending, waiting for it to be Running (with Ready = true)
I0623 16:08:44.460] Jun 23 15:39:34.941: INFO: The status of Pod gu-pod is Failed, which is unexpected
I0623 16:08:44.460] [AfterEach] With kubeconfig updated to static CPU Manager policy run the Topology Manager tests
I0623 16:08:44.460]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/topology_manager_test.go:969
I0623 16:08:44.460] I0623 15:39:37.317392    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.460] I0623 15:39:37.369469    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.461] I0623 15:39:37.369494    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.461] I0623 15:39:38.370829    2494 server.go:182] Initial health check passed for service "kubelet"
... skipping 5 lines ...
I0623 16:08:44.462] Jun 23 15:39:40.005: INFO: At 2021-06-23 15:38:35 +0000 UTC - event for non-gu-pod: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine
I0623 16:08:44.462] Jun 23 15:39:40.005: INFO: At 2021-06-23 15:38:35 +0000 UTC - event for non-gu-pod: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Created: Created container non-gu-container
I0623 16:08:44.462] Jun 23 15:39:40.005: INFO: At 2021-06-23 15:38:35 +0000 UTC - event for non-gu-pod: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Started: Started container non-gu-container
I0623 16:08:44.462] Jun 23 15:39:40.005: INFO: At 2021-06-23 15:38:36 +0000 UTC - event for non-gu-pod: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Killing: Stopping container non-gu-container
I0623 16:08:44.462] Jun 23 15:39:40.005: INFO: At 2021-06-23 15:39:32 +0000 UTC - event for gu-pod: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} TopologyAffinityError: Resources cannot be allocated with Topology locality
I0623 16:08:44.463] Jun 23 15:39:40.007: INFO: POD     NODE                                                             PHASE   GRACE  CONDITIONS
I0623 16:08:44.463] Jun 23 15:39:40.007: INFO: gu-pod  n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8  Failed         []
I0623 16:08:44.463] Jun 23 15:39:40.007: INFO: 
I0623 16:08:44.463] Jun 23 15:39:40.009: INFO: 
I0623 16:08:44.463] Logging node info for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.470] Jun 23 15:39:40.011: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8    f8abf7bb-64d0-4a09-8d76-bda6f82ca588 3416 0 2021-06-23 14:06:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-06-23 14:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e_node.test Update v1 2021-06-23 15:36:27 +0000 UTC FieldsV1 {"f:spec":{"f:configSource":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{}}}}} } {kubelet Update v1 2021-06-23 15:38:32 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:cpu":{},"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{".":{},"f:active":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}},"f:assigned":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-p82t9,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7807873024 0} {<nil>} 7624876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7545729024 0} {<nil>} 7368876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-23 15:39:38 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-23 15:39:38 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-23 15:39:38 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-06-23 15:39:38 +0000 UTC,LastTransitionTime:2021-06-23 15:39:38 +0000 UTC,Reason:KubeletNotReady,Message:container runtime 
status check may not have completed yet,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.44,},NodeAddress{Type:Hostname,Address:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4e796a5e880bc4c48961313e1ae0f7f2,SystemUUID:4e796a5e-880b-c4c4-8961-313e1ae0f7f2,BootID:652e6d17-906d-40c8-b209-9babb77c0a87,KernelVersion:5.12.7-300.fc34.x86_64,OSImage:Fedora CoreOS 34.20210529.3.0,ContainerRuntimeVersion:cri-o://1.21.0,KubeletVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,KubeProxyVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:70283c77abb54f37e57cf4b838ca8978a66e6da3bd72c555696e0eaae1356b58 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1648681988,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster@sha256:83aae3701992f5ab15b9093bc73e77b43cf61e2522d7bf90d61dcb383b818b22 k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:352434302,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs@sha256:90af3b1795d2669a4a07d3a0fecbaa2ac920ef69b3c588e93423e74501793cdc k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:272582535,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd@sha256:cba7b71304b6369c0d5e1ea5e70631354b5824c7f75dbce9d63149af216efbeb k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:128894977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost@sha256:ef11a0f696f3489a1684af5525419ac332df8682a148c6843b4da63c1503ee5b k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:126732584,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:100377317,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:55e2dc12800dbf891abc700ef3004acf08ec15cc0fab95634327c09fd6d097eb k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:99655908,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:5b9eab56404c721c2f193d7967b57a92339506dfdba37e496e48304ff172e5b4 k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:99654372,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 
docker.io/google/cadvisor:latest],SizeBytes:70377136,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot@sha256:93f8fe220940db5f92e1572e72b1457fc683ea3aebd24ac9474c6bca65660834 k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:43878048,},ContainerImage{Names:[gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b],SizeBytes:33602447,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx@sha256:ebf4de42b3d660133f6f7d0feddabe31a44d07ed55f59471fd2072b0d8e8afae k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:17245687,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils@sha256:d2a412b68cba0c952d98f837aeab5ab13e075dfbd78fcd183b76afa20de5bd3d k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:12250746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:244bdbdf4b8d368b5836e9d2c7808a280a73ad72ae321d644e9f220da503218f k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1374910,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause@sha256:369201a612f7b2b585a8e6ca99f77a36bcdbd032463d815388a96800b63ef2c8 k8s.gcr.io/pause:3.5],SizeBytes:689969,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-p82t9,UID:db7809e3-be30-4128-be16-3c884bee290a,ResourceVersion:3406,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-p82t9,UID:db7809e3-be30-4128-be16-3c884bee290a,ResourceVersion:3406,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
I0623 16:08:44.471] Jun 23 15:39:40.011: INFO: 
I0623 16:08:44.471] Logging kubelet events for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.471] Jun 23 15:39:40.013: INFO: 
I0623 16:08:44.471] Logging pods the kubelet thinks are on node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.471] Jun 23 15:39:40.016: INFO: gu-pod started at 2021-06-23 15:39:32 +0000 UTC (0+0 container statuses recorded)
I0623 16:08:44.471] W0623 15:39:40.017971    2494 metrics_grabber.go:89] Can't find any pods in namespace kube-system to grab metrics from
... skipping 16 lines ...
I0623 16:08:44.474] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:23
I0623 16:08:44.474]   With kubeconfig updated to static CPU Manager policy run the Topology Manager tests
I0623 16:08:44.474]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/topology_manager_test.go:979
I0623 16:08:44.474]     run Topology Manager policy test suite [It]
I0623 16:08:44.474]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/topology_manager_test.go:888
I0623 16:08:44.474] 
I0623 16:08:44.475]     Unexpected error:
I0623 16:08:44.475]         <*errors.errorString | 0xc0003bc560>: {
I0623 16:08:44.475]             s: "pod ran to completion",
I0623 16:08:44.475]         }
I0623 16:08:44.475]         pod ran to completion
I0623 16:08:44.475]     occurred
I0623 16:08:44.475] 
... skipping 43 lines ...
I0623 16:08:44.481] I0623 15:40:15.438860    2494 util.go:247] new configuration has taken effect
I0623 16:08:44.481] STEP: Found 0 events.
I0623 16:08:44.481] Jun 23 15:40:15.444: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
I0623 16:08:44.481] Jun 23 15:40:15.444: INFO: 
I0623 16:08:44.481] Jun 23 15:40:15.446: INFO: 
I0623 16:08:44.481] Logging node info for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.489] Jun 23 15:40:15.448: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8    f8abf7bb-64d0-4a09-8d76-bda6f82ca588 3465 0 2021-06-23 14:06:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-06-23 14:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e_node.test Update v1 2021-06-23 15:36:27 +0000 UTC FieldsV1 {"f:spec":{"f:configSource":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{}}}}} } {kubelet Update v1 2021-06-23 15:38:32 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:cpu":{},"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{".":{},"f:active":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}},"f:assigned":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-m7nzh,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7807873024 0} {<nil>} 7624876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7545729024 0} {<nil>} 7368876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-23 15:40:11 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-23 15:40:11 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-23 15:40:11 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-06-23 15:40:11 +0000 UTC,LastTransitionTime:2021-06-23 15:40:11 +0000 UTC,Reason:KubeletNotReady,Message:container runtime 
status check may not have completed yet,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.44,},NodeAddress{Type:Hostname,Address:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4e796a5e880bc4c48961313e1ae0f7f2,SystemUUID:4e796a5e-880b-c4c4-8961-313e1ae0f7f2,BootID:652e6d17-906d-40c8-b209-9babb77c0a87,KernelVersion:5.12.7-300.fc34.x86_64,OSImage:Fedora CoreOS 34.20210529.3.0,ContainerRuntimeVersion:cri-o://1.21.0,KubeletVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,KubeProxyVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:70283c77abb54f37e57cf4b838ca8978a66e6da3bd72c555696e0eaae1356b58 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1648681988,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster@sha256:83aae3701992f5ab15b9093bc73e77b43cf61e2522d7bf90d61dcb383b818b22 k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:352434302,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs@sha256:90af3b1795d2669a4a07d3a0fecbaa2ac920ef69b3c588e93423e74501793cdc k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:272582535,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd@sha256:cba7b71304b6369c0d5e1ea5e70631354b5824c7f75dbce9d63149af216efbeb k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:128894977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost@sha256:ef11a0f696f3489a1684af5525419ac332df8682a148c6843b4da63c1503ee5b k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:126732584,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:100377317,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:55e2dc12800dbf891abc700ef3004acf08ec15cc0fab95634327c09fd6d097eb k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:99655908,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:5b9eab56404c721c2f193d7967b57a92339506dfdba37e496e48304ff172e5b4 k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:99654372,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 docker.io/google/cadvisor:latest],SizeBytes:70377136,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot@sha256:93f8fe220940db5f92e1572e72b1457fc683ea3aebd24ac9474c6bca65660834 k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:43878048,},ContainerImage{Names:[gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b],SizeBytes:33602447,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx@sha256:ebf4de42b3d660133f6f7d0feddabe31a44d07ed55f59471fd2072b0d8e8afae k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:17245687,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils@sha256:d2a412b68cba0c952d98f837aeab5ab13e075dfbd78fcd183b76afa20de5bd3d k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:12250746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:244bdbdf4b8d368b5836e9d2c7808a280a73ad72ae321d644e9f220da503218f k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1374910,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause@sha256:369201a612f7b2b585a8e6ca99f77a36bcdbd032463d815388a96800b63ef2c8 k8s.gcr.io/pause:3.5],SizeBytes:689969,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-m7nzh,UID:48090160-6a8f-4ceb-bdb0-e1c28116a536,ResourceVersion:3450,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-m7nzh,UID:48090160-6a8f-4ceb-bdb0-e1c28116a536,ResourceVersion:3450,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
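The Node Info dump above ends with a NodeConfigStatus block: under the DynamicKubeletConfig feature gate (enabled for this job), the kubelet reports which ConfigMap-backed configuration is Assigned, Active, and LastKnownGood, plus any config Error. A minimal client-go sketch for reading that status off a node, assuming a reachable cluster; the kubeconfig path is a placeholder, and the node name is the one from the dump:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	node, err := client.CoreV1().Nodes().Get(context.TODO(),
		"n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8",
		metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Status.Config is only populated while a config source is assigned.
	if c := node.Status.Config; c != nil {
		fmt.Printf("assigned:      %+v\n", c.Assigned)
		fmt.Printf("active:        %+v\n", c.Active)
		fmt.Printf("lastKnownGood: %+v\n", c.LastKnownGood)
		fmt.Printf("error:         %q\n", c.Error)
	}
}
```

Active matching Assigned, as in the dump (testcfg-m7nzh at ResourceVersion 3450), indicates the assigned ConfigMap was downloaded and loaded successfully.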
I0623 16:08:44.489] Jun 23 15:40:15.449: INFO: 
I0623 16:08:44.489] Logging kubelet events for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.489] Jun 23 15:40:15.450: INFO: 
I0623 16:08:44.489] Logging pods the kubelet thinks is on node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.489] W0623 15:40:15.456709    2494 metrics_grabber.go:89] Can't find any pods in namespace kube-system to grab metrics from
I0623 16:08:44.490] W0623 15:40:15.456729    2494 metrics_grabber.go:107] Can't find kube-scheduler pod. Grabbing metrics from kube-scheduler is disabled.
... skipping 15 lines ...
I0623 16:08:44.492]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/quota_lsci_test.go:57
I0623 16:08:44.492]     
I0623 16:08:44.492]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:460
I0623 16:08:44.492]       should eventually evict all of the correct pods [BeforeEach]
I0623 16:08:44.492]       _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:475
I0623 16:08:44.493] 
I0623 16:08:44.493]       Unexpected error:
I0623 16:08:44.493]           <*exec.ExitError | 0xc000465140>: {
I0623 16:08:44.493]               ProcessState: {
I0623 16:08:44.493]                   pid: 32708,
I0623 16:08:44.493]                   status: 256,
I0623 16:08:44.493]                   rusage: {
I0623 16:08:44.493]                       Utime: {Sec: 0, Usec: 25319},
... skipping 65 lines ...
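The eviction test's BeforeEach surfaces an *exec.ExitError whose raw ProcessState prints status: 256. On Unix that number is the wait status, not the exit code; the exit code sits in the high byte, so 256 decodes to exit code 1 (matching the harness-level "exit status 1"). A small Unix-only sketch of that decoding, not the test's own code:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"syscall"
)

func main() {
	err := exec.Command("sh", "-c", "exit 1").Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Sys() returns the raw wait status on Unix; the exit code
		// lives in the high byte, so 256 == 1<<8 means exit code 1.
		ws := exitErr.Sys().(syscall.WaitStatus)
		fmt.Printf("raw wait status: %d, exit code: %d\n", ws, ws.ExitStatus())
	}
}
```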
I0623 16:08:44.502] STEP: back to "Node.Spec.ConfigSource is nil" from "correct"
I0623 16:08:44.502] I0623 15:40:36.678220    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.502] I0623 15:40:47.694442    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.502] I0623 15:40:47.739261    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.503] I0623 15:40:47.739283    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.503] I0623 15:40:48.740326    2494 server.go:182] Initial health check passed for service "kubelet"
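Each STEP above follows the same harness pattern: restart the kubelet service (the server.go:222 lines), then poll health (server.go:171) and readiness (util.go:48) until the initial check passes. A rough sketch of that restart-and-poll loop, assuming a systemd-managed kubelet and its default healthz endpoint on 127.0.0.1:10248; the harness's actual restart command and checks live in test/e2e_node, not in this code:

```go
package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"time"
)

func main() {
	// "Restarting server \"kubelet\" with restart command"
	if err := exec.Command("systemctl", "restart", "kubelet").Run(); err != nil {
		panic(err)
	}
	// "Running health check for service \"kubelet\"": poll until healthy.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := http.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println(`Initial health check passed for service "kubelet"`)
				return
			}
		}
		time.Sleep(time.Second)
	}
	panic("kubelet did not become healthy before the deadline")
}
```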
I0623 16:08:44.503] STEP: from "Node.Spec.ConfigSource is nil" to "fail-parse"
I0623 16:08:44.503] I0623 15:41:00.759225    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.503] I0623 15:41:00.805658    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.504] I0623 15:41:00.805683    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.504] I0623 15:41:01.806973    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.504] STEP: back to "Node.Spec.ConfigSource is nil" from "fail-parse"
I0623 16:08:44.504] I0623 15:41:11.823462    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.504] I0623 15:41:11.868017    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.505] I0623 15:41:11.868040    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.505] I0623 15:41:12.869595    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.505] STEP: from "Node.Spec.ConfigSource is nil" to "fail-validate"
I0623 16:08:44.505] I0623 15:41:23.887342    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.505] I0623 15:41:23.931941    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.506] I0623 15:41:23.931964    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.506] I0623 15:41:24.935957    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.506] STEP: back to "Node.Spec.ConfigSource is nil" from "fail-validate"
I0623 16:08:44.506] I0623 15:41:35.954868    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.506] I0623 15:41:35.998968    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.507] I0623 15:41:35.998989    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.507] STEP: setting initial state "Node.Spec.ConfigSource has all nil subfields"
I0623 16:08:44.507] STEP: from "Node.Spec.ConfigSource has all nil subfields" to "Node.Spec.ConfigSource.ConfigMap is missing namespace"
I0623 16:08:44.507] STEP: back to "Node.Spec.ConfigSource has all nil subfields" from "Node.Spec.ConfigSource.ConfigMap is missing namespace"
... skipping 15 lines ...
I0623 16:08:44.511] STEP: from "Node.Spec.ConfigSource has all nil subfields" to "correct"
I0623 16:08:44.511] I0623 15:41:48.018768    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.512] I0623 15:41:48.062950    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.512] I0623 15:41:48.062975    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.512] STEP: back to "Node.Spec.ConfigSource has all nil subfields" from "correct"
I0623 16:08:44.512] I0623 15:41:49.069180    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.512] STEP: from "Node.Spec.ConfigSource has all nil subfields" to "fail-parse"
I0623 16:08:44.512] I0623 15:42:00.086445    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.512] I0623 15:42:00.138998    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.513] I0623 15:42:00.139016    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.513] I0623 15:42:01.153508    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.513] STEP: back to "Node.Spec.ConfigSource has all nil subfields" from "fail-parse"
I0623 16:08:44.513] STEP: from "Node.Spec.ConfigSource has all nil subfields" to "fail-validate"
I0623 16:08:44.513] I0623 15:42:12.168770    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.513] I0623 15:42:12.212929    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.513] I0623 15:42:12.212951    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.513] I0623 15:42:13.214126    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.514] STEP: back to "Node.Spec.ConfigSource has all nil subfields" from "fail-validate"
I0623 16:08:44.514] STEP: setting initial state "Node.Spec.ConfigSource.ConfigMap is missing namespace"
I0623 16:08:44.514] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing namespace" to "Node.Spec.ConfigSource.ConfigMap is missing name"
I0623 16:08:44.514] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing namespace" from "Node.Spec.ConfigSource.ConfigMap is missing name"
I0623 16:08:44.514] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing namespace" to "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey"
I0623 16:08:44.514] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing namespace" from "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey"
I0623 16:08:44.515] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing namespace" to "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified"
... skipping 9 lines ...
I0623 16:08:44.516] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing namespace" to "correct"
I0623 16:08:44.516] I0623 15:42:24.231215    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.516] I0623 15:42:24.283940    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.517] I0623 15:42:24.283964    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.517] I0623 15:42:25.285222    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.517] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing namespace" from "correct"
I0623 16:08:44.517] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing namespace" to "fail-parse"
I0623 16:08:44.517] I0623 15:42:37.303885    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.517] I0623 15:42:37.348355    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.517] I0623 15:42:37.348379    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.518] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing namespace" from "fail-parse"
I0623 16:08:44.518] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing namespace" to "fail-validate"
I0623 16:08:44.518] I0623 15:42:38.350676    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.518] I0623 15:42:49.366336    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.518] I0623 15:42:49.416154    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.518] I0623 15:42:49.416172    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.518] I0623 15:42:50.427911    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.518] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing namespace" from "fail-validate"
I0623 16:08:44.519] STEP: setting initial state "Node.Spec.ConfigSource.ConfigMap is missing name"
I0623 16:08:44.519] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing name" to "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey"
I0623 16:08:44.519] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing name" from "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey"
I0623 16:08:44.519] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing name" to "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified"
I0623 16:08:44.519] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing name" from "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified"
I0623 16:08:44.519] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing name" to "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified"
... skipping 7 lines ...
I0623 16:08:44.521] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing name" to "correct"
I0623 16:08:44.521] I0623 15:43:01.444877    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.521] I0623 15:43:01.490614    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.521] I0623 15:43:01.490638    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.521] I0623 15:43:02.491865    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.521] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing name" from "correct"
I0623 16:08:44.521] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing name" to "fail-parse"
I0623 16:08:44.522] I0623 15:43:12.507178    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.522] I0623 15:43:12.551053    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.522] I0623 15:43:12.551234    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.522] I0623 15:43:13.552634    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.522] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing name" from "fail-parse"
I0623 16:08:44.522] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing name" to "fail-validate"
I0623 16:08:44.522] I0623 15:43:24.570037    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.522] I0623 15:43:24.616614    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.523] I0623 15:43:24.616647    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.523] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing name" from "fail-validate"
I0623 16:08:44.523] STEP: setting initial state "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey"
I0623 16:08:44.523] I0623 15:43:25.623155    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.523] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey" to "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified"
I0623 16:08:44.523] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey" from "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified"
I0623 16:08:44.523] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey" to "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified"
I0623 16:08:44.524] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey" from "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified"
... skipping 6 lines ...
I0623 16:08:44.525] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey" to "correct"
I0623 16:08:44.525] I0623 15:43:36.639515    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.525] I0623 15:43:36.683952    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.525] I0623 15:43:36.683974    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.525] I0623 15:43:37.685101    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.525] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey" from "correct"
I0623 16:08:44.526] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey" to "fail-parse"
I0623 16:08:44.526] I0623 15:43:48.703223    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.526] I0623 15:43:48.747572    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.526] I0623 15:43:48.747595    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.526] I0623 15:43:49.748887    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.526] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey" from "fail-parse"
I0623 16:08:44.526] STEP: from "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey" to "fail-validate"
I0623 16:08:44.526] I0623 15:44:00.763872    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.527] I0623 15:44:00.807978    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.527] I0623 15:44:00.808002    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.527] STEP: back to "Node.Spec.ConfigSource.ConfigMap is missing kubeletConfigKey" from "fail-validate"
I0623 16:08:44.527] I0623 15:44:01.820144    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.527] STEP: setting initial state "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified"
I0623 16:08:44.527] STEP: from "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified" to "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified"
I0623 16:08:44.527] STEP: back to "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified" from "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified"
I0623 16:08:44.530] STEP: from "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified" to "Node.Spec.ConfigSource.ConfigMap has invalid namespace"
I0623 16:08:44.531] STEP: back to "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified" from "Node.Spec.ConfigSource.ConfigMap has invalid namespace"
... skipping 4 lines ...
I0623 16:08:44.533] STEP: from "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified" to "correct"
I0623 16:08:44.533] I0623 15:44:12.835002    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.533] I0623 15:44:12.880215    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.534] I0623 15:44:12.880240    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.534] I0623 15:44:13.881360    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.534] STEP: back to "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified" from "correct"
I0623 16:08:44.534] STEP: from "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified" to "fail-parse"
I0623 16:08:44.535] I0623 15:44:23.896943    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.535] I0623 15:44:23.941958    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.535] I0623 15:44:23.941982    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.535] I0623 15:44:24.943143    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.536] STEP: back to "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified" from "fail-parse"
I0623 16:08:44.536] STEP: from "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified" to "fail-validate"
I0623 16:08:44.536] I0623 15:44:34.959721    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.536] I0623 15:44:35.003934    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.536] I0623 15:44:35.003957    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.537] STEP: back to "Node.Spec.ConfigSource.ConfigMap.UID is illegally specified" from "fail-validate"
I0623 16:08:44.537] STEP: setting initial state "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified"
I0623 16:08:44.537] STEP: from "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified" to "Node.Spec.ConfigSource.ConfigMap has invalid namespace"
I0623 16:08:44.538] STEP: back to "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified" from "Node.Spec.ConfigSource.ConfigMap has invalid namespace"
I0623 16:08:44.538] STEP: from "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified" to "Node.Spec.ConfigSource.ConfigMap has invalid name"
I0623 16:08:44.538] STEP: back to "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified" from "Node.Spec.ConfigSource.ConfigMap has invalid name"
I0623 16:08:44.539] STEP: from "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified" to "Node.Spec.ConfigSource.ConfigMap has invalid kubeletConfigKey"
... skipping 2 lines ...
I0623 16:08:44.540] I0623 15:44:36.015624    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.540] I0623 15:44:46.030413    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.540] I0623 15:44:46.077599    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.540] I0623 15:44:46.077624    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.540] I0623 15:44:47.080965    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.541] STEP: back to "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified" from "correct"
I0623 16:08:44.541] STEP: from "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified" to "fail-parse"
I0623 16:08:44.541] I0623 15:44:58.097220    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.541] I0623 15:44:58.142267    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.542] I0623 15:44:58.142290    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.542] I0623 15:44:59.144286    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.542] STEP: back to "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified" from "fail-parse"
I0623 16:08:44.542] STEP: from "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified" to "fail-validate"
I0623 16:08:44.543] I0623 15:45:09.160458    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.543] I0623 15:45:09.204503    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.543] I0623 15:45:09.204526    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.543] I0623 15:45:10.205585    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.544] STEP: back to "Node.Spec.ConfigSource.ConfigMap.ResourceVersion is illegally specified" from "fail-validate"
I0623 16:08:44.544] STEP: setting initial state "Node.Spec.ConfigSource.ConfigMap has invalid namespace"
I0623 16:08:44.544] STEP: from "Node.Spec.ConfigSource.ConfigMap has invalid namespace" to "Node.Spec.ConfigSource.ConfigMap has invalid name"
I0623 16:08:44.545] STEP: back to "Node.Spec.ConfigSource.ConfigMap has invalid namespace" from "Node.Spec.ConfigSource.ConfigMap has invalid name"
I0623 16:08:44.545] STEP: from "Node.Spec.ConfigSource.ConfigMap has invalid namespace" to "Node.Spec.ConfigSource.ConfigMap has invalid kubeletConfigKey"
I0623 16:08:44.545] STEP: back to "Node.Spec.ConfigSource.ConfigMap has invalid namespace" from "Node.Spec.ConfigSource.ConfigMap has invalid kubeletConfigKey"
I0623 16:08:44.545] STEP: from "Node.Spec.ConfigSource.ConfigMap has invalid namespace" to "correct"
I0623 16:08:44.546] I0623 15:45:20.221416    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.546] I0623 15:45:20.266199    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.546] I0623 15:45:20.266221    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.546] I0623 15:45:21.267878    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.547] STEP: back to "Node.Spec.ConfigSource.ConfigMap has invalid namespace" from "correct"
I0623 16:08:44.547] STEP: from "Node.Spec.ConfigSource.ConfigMap has invalid namespace" to "fail-parse"
I0623 16:08:44.547] I0623 15:45:32.287129    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.547] I0623 15:45:32.331937    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.548] I0623 15:45:32.331960    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.548] STEP: back to "Node.Spec.ConfigSource.ConfigMap has invalid namespace" from "fail-parse"
I0623 16:08:44.548] STEP: from "Node.Spec.ConfigSource.ConfigMap has invalid namespace" to "fail-validate"
I0623 16:08:44.548] I0623 15:45:33.333050    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.549] I0623 15:45:44.351135    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.549] I0623 15:45:44.396211    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.549] I0623 15:45:44.396243    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.549] STEP: back to "Node.Spec.ConfigSource.ConfigMap has invalid namespace" from "fail-validate"
I0623 16:08:44.550] I0623 15:45:45.406736    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.550] STEP: setting initial state "Node.Spec.ConfigSource.ConfigMap has invalid name"
I0623 16:08:44.550] STEP: from "Node.Spec.ConfigSource.ConfigMap has invalid name" to "Node.Spec.ConfigSource.ConfigMap has invalid kubeletConfigKey"
I0623 16:08:44.550] STEP: back to "Node.Spec.ConfigSource.ConfigMap has invalid name" from "Node.Spec.ConfigSource.ConfigMap has invalid kubeletConfigKey"
I0623 16:08:44.551] STEP: from "Node.Spec.ConfigSource.ConfigMap has invalid name" to "correct"
I0623 16:08:44.551] I0623 15:45:56.421534    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.551] I0623 15:45:56.466004    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.551] I0623 15:45:56.466029    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.552] I0623 15:45:57.467596    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.552] STEP: back to "Node.Spec.ConfigSource.ConfigMap has invalid name" from "correct"
I0623 16:08:44.552] STEP: from "Node.Spec.ConfigSource.ConfigMap has invalid name" to "fail-parse"
I0623 16:08:44.552] I0623 15:46:07.500601    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.553] I0623 15:46:07.558755    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.553] I0623 15:46:07.558778    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.553] I0623 15:46:08.560005    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.553] STEP: back to "Node.Spec.ConfigSource.ConfigMap has invalid name" from "fail-parse"
I0623 16:08:44.554] STEP: from "Node.Spec.ConfigSource.ConfigMap has invalid name" to "fail-validate"
I0623 16:08:44.554] I0623 15:46:19.575717    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.554] I0623 15:46:19.619246    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.554] I0623 15:46:19.619268    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.555] I0623 15:46:20.620553    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.555] STEP: back to "Node.Spec.ConfigSource.ConfigMap has invalid name" from "fail-validate"
I0623 16:08:44.555] STEP: setting initial state "Node.Spec.ConfigSource.ConfigMap has invalid kubeletConfigKey"
I0623 16:08:44.555] STEP: from "Node.Spec.ConfigSource.ConfigMap has invalid kubeletConfigKey" to "correct"
I0623 16:08:44.556] I0623 15:46:30.635805    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.556] I0623 15:46:30.680234    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.556] I0623 15:46:30.680257    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.556] STEP: back to "Node.Spec.ConfigSource.ConfigMap has invalid kubeletConfigKey" from "correct"
I0623 16:08:44.557] STEP: from "Node.Spec.ConfigSource.ConfigMap has invalid kubeletConfigKey" to "fail-parse"
I0623 16:08:44.557] I0623 15:46:31.681876    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.557] I0623 15:46:42.701885    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.557] I0623 15:46:42.746133    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.558] I0623 15:46:42.746157    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.558] STEP: back to "Node.Spec.ConfigSource.ConfigMap has invalid kubeletConfigKey" from "fail-parse"
I0623 16:08:44.558] STEP: from "Node.Spec.ConfigSource.ConfigMap has invalid kubeletConfigKey" to "fail-validate"
I0623 16:08:44.558] I0623 15:46:43.747294    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.559] I0623 15:46:54.763431    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.559] I0623 15:46:54.815015    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.559] I0623 15:46:54.815031    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.559] I0623 15:46:55.816476    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.559] STEP: back to "Node.Spec.ConfigSource.ConfigMap has invalid kubeletConfigKey" from "fail-validate"
I0623 16:08:44.560] STEP: setting initial state "correct"
I0623 16:08:44.560] I0623 15:47:07.833208    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.560] I0623 15:47:07.877028    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.560] I0623 15:47:07.877051    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.561] I0623 15:47:08.878907    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.561] STEP: from "correct" to "fail-parse"
I0623 16:08:44.561] I0623 15:47:18.893342    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.561] I0623 15:47:18.937626    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.562] I0623 15:47:18.937651    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.562] I0623 15:47:19.939418    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.562] STEP: back to "correct" from "fail-parse"
I0623 16:08:44.562] I0623 15:47:29.954159    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.562] I0623 15:47:29.999429    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.563] I0623 15:47:29.999452    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.563] I0623 15:47:31.001881    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.563] STEP: from "correct" to "fail-validate"
I0623 16:08:44.563] I0623 15:47:42.020164    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.564] I0623 15:47:42.064217    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.564] I0623 15:47:42.064240    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.564] STEP: back to "correct" from "fail-validate"
I0623 16:08:44.564] I0623 15:47:43.066127    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.565] I0623 15:47:54.082031    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.565] I0623 15:47:54.126454    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.565] I0623 15:47:54.126477    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.565] STEP: setting initial state "fail-parse"
I0623 16:08:44.565] I0623 15:47:55.128772    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.566] I0623 15:48:06.145517    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.566] I0623 15:48:06.190141    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.566] I0623 15:48:06.190164    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.566] I0623 15:48:07.191206    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.567] STEP: from "fail-parse" to "fail-validate"
I0623 16:08:44.567] I0623 15:48:17.204757    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.567] I0623 15:48:17.248805    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.567] I0623 15:48:17.248827    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.568] I0623 15:48:18.250170    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.568] STEP: back to "fail-parse" from "fail-validate"
I0623 16:08:44.568] I0623 15:48:28.265352    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.568] I0623 15:48:28.308931    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.569] I0623 15:48:28.308953    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.569] I0623 15:48:29.310121    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.569] STEP: setting initial state "fail-validate"
I0623 16:08:44.569] I0623 15:48:39.324662    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.569] I0623 15:48:39.368371    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.570] I0623 15:48:39.368402    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.570] I0623 15:48:40.369480    2494 server.go:182] Initial health check passed for service "kubelet"
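The state names these STEPs cycle through correspond to different shapes of Node.Spec.ConfigSource. A sketch of the "correct" shape using the core/v1 types, with the ConfigMap name mirroring the log; the "illegally specified" states deliberately set UID or ResourceVersion in the spec, and the "missing"/"invalid" states break the other fields:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	src := &v1.NodeConfigSource{
		ConfigMap: &v1.ConfigMapNodeConfigSource{
			Namespace:        "kube-system",
			Name:             "testcfg-m7nzh",
			KubeletConfigKey: "kubelet",
			// UID and ResourceVersion must stay empty in Node.Spec;
			// the kubelet reports resolved values in Node.Status.Config.
		},
	}
	fmt.Printf("%+v\n", src.ConfigMap)
}
```

"fail-parse" and "fail-validate" point this same structure at ConfigMaps whose payload is, respectively, unparseable or semantically invalid kubelet configuration, which is why the kubelet is restarted and health-checked after every transition.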
I0623 16:08:44.570] [AfterEach] 
I0623 16:08:44.570]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:123
... skipping 213 lines ...
I0623 16:08:44.612] STEP: Collecting events from namespace "device-plugin-errors-2648".
I0623 16:08:44.612] I0623 15:54:39.500746    2494 util.go:247] new configuration has taken effect
I0623 16:08:44.612] STEP: Found 7 events.
I0623 16:08:44.613] Jun 23 15:54:39.503: INFO: At 2021-06-23 15:49:36 +0000 UTC - event for device-plugin-test-4481f090-801e-464d-8c85-eb3c874aca0c: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine
I0623 16:08:44.613] Jun 23 15:54:39.503: INFO: At 2021-06-23 15:49:36 +0000 UTC - event for device-plugin-test-4481f090-801e-464d-8c85-eb3c874aca0c: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Created: Created container device-plugin-test-4481f090-801e-464d-8c85-eb3c874aca0c
I0623 16:08:44.614] Jun 23 15:54:39.503: INFO: At 2021-06-23 15:49:37 +0000 UTC - event for device-plugin-test-4481f090-801e-464d-8c85-eb3c874aca0c: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Started: Started container device-plugin-test-4481f090-801e-464d-8c85-eb3c874aca0c
I0623 16:08:44.614] Jun 23 15:54:39.503: INFO: At 2021-06-23 15:49:38 +0000 UTC - event for device-plugin-test-4481f090-801e-464d-8c85-eb3c874aca0c: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} BackOff: Back-off restarting failed container
I0623 16:08:44.614] Jun 23 15:54:39.503: INFO: At 2021-06-23 15:54:38 +0000 UTC - event for device-plugin-test-4481f090-801e-464d-8c85-eb3c874aca0c: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine
I0623 16:08:44.615] Jun 23 15:54:39.503: INFO: At 2021-06-23 15:54:38 +0000 UTC - event for device-plugin-test-4481f090-801e-464d-8c85-eb3c874aca0c: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Created: Created container device-plugin-test-4481f090-801e-464d-8c85-eb3c874aca0c
I0623 16:08:44.615] Jun 23 15:54:39.503: INFO: At 2021-06-23 15:54:39 +0000 UTC - event for device-plugin-test-4481f090-801e-464d-8c85-eb3c874aca0c: {kubelet n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8} Started: Started container device-plugin-test-4481f090-801e-464d-8c85-eb3c874aca0c
I0623 16:08:44.615] Jun 23 15:54:39.505: INFO: POD                                                      NODE                                                             PHASE    GRACE  CONDITIONS
I0623 16:08:44.616] Jun 23 15:54:39.505: INFO: device-plugin-test-4481f090-801e-464d-8c85-eb3c874aca0c  n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-06-23 15:49:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-06-23 15:49:34 +0000 UTC ContainersNotReady containers with unready status: [device-plugin-test-4481f090-801e-464d-8c85-eb3c874aca0c]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-06-23 15:49:34 +0000 UTC ContainersNotReady containers with unready status: [device-plugin-test-4481f090-801e-464d-8c85-eb3c874aca0c]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-06-23 15:49:34 +0000 UTC  }]
I0623 16:08:44.616] Jun 23 15:54:39.505: INFO: 
I0623 16:08:44.616] Jun 23 15:54:39.507: INFO: 
I0623 16:08:44.616] Logging node info for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.625] Jun 23 15:54:39.508: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8    f8abf7bb-64d0-4a09-8d76-bda6f82ca588 4306 0 2021-06-23 14:06:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-06-23 14:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e_node.test Update v1 2021-06-23 15:48:51 +0000 UTC FieldsV1 {"f:spec":{"f:configSource":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{}}}}} } {kubelet Update v1 2021-06-23 15:49:44 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/resource":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{},"f:example.com/resource":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{".":{},"f:active":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}},"f:assigned":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-djqxx,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},example.com/resource: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7807873024 0} {<nil>} 7624876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},example.com/resource: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7545729024 0} {<nil>} 7368876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-23 15:54:37 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-23 15:54:37 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-23 15:54:37 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-06-23 15:54:37 +0000 UTC,LastTransitionTime:2021-06-23 15:54:37 +0000 UTC,Reason:KubeletNotReady,Message:container runtime status check may not have completed yet,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.44,},NodeAddress{Type:Hostname,Address:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4e796a5e880bc4c48961313e1ae0f7f2,SystemUUID:4e796a5e-880b-c4c4-8961-313e1ae0f7f2,BootID:652e6d17-906d-40c8-b209-9babb77c0a87,KernelVersion:5.12.7-300.fc34.x86_64,OSImage:Fedora CoreOS 34.20210529.3.0,ContainerRuntimeVersion:cri-o://1.21.0,KubeletVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,KubeProxyVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:70283c77abb54f37e57cf4b838ca8978a66e6da3bd72c555696e0eaae1356b58 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1648681988,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster@sha256:83aae3701992f5ab15b9093bc73e77b43cf61e2522d7bf90d61dcb383b818b22 k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:352434302,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs@sha256:90af3b1795d2669a4a07d3a0fecbaa2ac920ef69b3c588e93423e74501793cdc k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:272582535,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd@sha256:cba7b71304b6369c0d5e1ea5e70631354b5824c7f75dbce9d63149af216efbeb k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:128894977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost@sha256:ef11a0f696f3489a1684af5525419ac332df8682a148c6843b4da63c1503ee5b k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:126732584,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:100377317,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:55e2dc12800dbf891abc700ef3004acf08ec15cc0fab95634327c09fd6d097eb k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:99655908,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:5b9eab56404c721c2f193d7967b57a92339506dfdba37e496e48304ff172e5b4 k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:99654372,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 docker.io/google/cadvisor:latest],SizeBytes:70377136,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot@sha256:93f8fe220940db5f92e1572e72b1457fc683ea3aebd24ac9474c6bca65660834 k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:43878048,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-device-plugin@sha256:3dd0413e5a78f1c2a6484f168ba3daf23ebb0b1141897237e9559db6c5f7101f k8s.gcr.io/e2e-test-images/sample-device-plugin@sha256:e84f6ca27c51ddedf812637dd2bcf771ad69fdca1173e5690c372370d0f93c40 k8s.gcr.io/e2e-test-images/sample-device-plugin:1.3],SizeBytes:41740418,},ContainerImage{Names:[gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b],SizeBytes:33602447,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx@sha256:ebf4de42b3d660133f6f7d0feddabe31a44d07ed55f59471fd2072b0d8e8afae k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:17245687,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils@sha256:d2a412b68cba0c952d98f837aeab5ab13e075dfbd78fcd183b76afa20de5bd3d k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:12250746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:244bdbdf4b8d368b5836e9d2c7808a280a73ad72ae321d644e9f220da503218f k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1374910,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause@sha256:369201a612f7b2b585a8e6ca99f77a36bcdbd032463d815388a96800b63ef2c8 k8s.gcr.io/pause:3.5],SizeBytes:689969,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-djqxx,UID:cca3cb49-568f-48ba-8ba0-9d96a119c432,ResourceVersion:4296,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-djqxx,UID:cca3cb49-568f-48ba-8ba0-9d96a119c432,ResourceVersion:4296,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
I0623 16:08:44.626] Jun 23 15:54:39.509: INFO: 
I0623 16:08:44.626] Logging kubelet events for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.626] Jun 23 15:54:39.510: INFO: 
I0623 16:08:44.627] Logging pods the kubelet thinks is on node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.627] Jun 23 15:54:39.513: INFO: sample-device-plugin started at 2021-06-23 15:49:24 +0000 UTC (0+1 container statuses recorded)
I0623 16:08:44.627] Jun 23 15:54:39.513: INFO: 	Container sample-device-plugin ready: true, restart count 0
... skipping 18 lines ...
I0623 16:08:44.631] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:23
I0623 16:08:44.631]   DevicePlugin
I0623 16:08:44.631]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/device_plugin_test.go:114
I0623 16:08:44.632]     Verifies the Kubelet device plugin functionality. [It]
I0623 16:08:44.632]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/device_plugin_test.go:122
I0623 16:08:44.632] 
I0623 16:08:44.632]     Unexpected error:
I0623 16:08:44.632]         <*errors.errorString | 0xc00027ac30>: {
I0623 16:08:44.632]             s: "timed out waiting for the condition",
I0623 16:08:44.632]         }
I0623 16:08:44.632]         timed out waiting for the condition
I0623 16:08:44.633]     occurred
I0623 16:08:44.633] 
... skipping 19 lines ...
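For context on the DevicePlugin failure reported above: the sample device plugin advertises the extended resource example.com/resource (capacity 2 in the node dump), the test pod requests one unit of it, and the error is a plain poll timeout while waiting for the expected pod state. A hedged sketch of such a pod spec using the core/v1 types; this is illustrative, not the test's exact pod definition:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "device-plugin-test"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "device-plugin-test",
				Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				Resources: v1.ResourceRequirements{
					Limits: v1.ResourceList{
						// Extended resources are requested via limits;
						// requests default to the same value.
						"example.com/resource": resource.MustParse("1"),
					},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Containers[0].Resources.Limits)
}
```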
I0623 16:08:44.636] [It] should set pids.max for Pod
I0623 16:08:44.636]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/pids_test.go:89
I0623 16:08:44.636] STEP: by creating a G pod
I0623 16:08:44.636] I0623 15:55:04.625426    2494 util.go:247] new configuration has taken effect
I0623 16:08:44.637] STEP: checking if the expected pids settings were applied
I0623 16:08:44.637] Jun 23 15:55:04.644: INFO: Pod to run command: expected=1024; actual=$(cat /tmp//kubepods.slice/kubepods-pod9e664557_34ad_4d36_b5a1_54c6e275542a.slice/pids.max); if [ "$expected" -ne "$actual" ]; then exit 1; fi; 
I0623 16:08:44.637] Jun 23 15:55:04.653: INFO: Waiting up to 5m0s for pod "pod02e0c014-f8c5-42ce-89aa-5a7abae418a4" in namespace "pids-limit-test-6184" to be "Succeeded or Failed"
I0623 16:08:44.638] Jun 23 15:55:04.665: INFO: Pod "pod02e0c014-f8c5-42ce-89aa-5a7abae418a4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.224861ms
I0623 16:08:44.638] Jun 23 15:55:06.670: INFO: Pod "pod02e0c014-f8c5-42ce-89aa-5a7abae418a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01679955s
I0623 16:08:44.638] Jun 23 15:55:08.673: INFO: Pod "pod02e0c014-f8c5-42ce-89aa-5a7abae418a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020105267s
I0623 16:08:44.638] STEP: Saw pod success
I0623 16:08:44.639] Jun 23 15:55:08.673: INFO: Pod "pod02e0c014-f8c5-42ce-89aa-5a7abae418a4" satisfied condition "Succeeded or Failed"
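The pids test above configures the kubelet with a pod PID limit of 1024 and has the pod compare pids.max in its own cgroup against that value; the pod reads the file through a cgroup mount under /tmp, which is why the logged command uses the /tmp//kubepods.slice path. A host-side equivalent of that check, sketched in Go under the assumption of a cgroup v1 layout with the pids controller at /sys/fs/cgroup/pids; the slice name is the machine-specific one from the log:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const expected = "1024" // the pod PID limit the test config applies
	path := "/sys/fs/cgroup/pids/kubepods.slice/" +
		"kubepods-pod9e664557_34ad_4d36_b5a1_54c6e275542a.slice/pids.max"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	if actual := strings.TrimSpace(string(data)); actual != expected {
		fmt.Printf("pids.max mismatch: expected %s, got %s\n", expected, actual)
		os.Exit(1)
	}
	fmt.Println("pids.max matches the configured limit")
}
```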
I0623 16:08:44.639] [AfterEach] With config updated with pids limits
I0623 16:08:44.639]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/util.go:175
I0623 16:08:44.639] I0623 15:55:13.259536    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.639] I0623 15:55:13.305913    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.639] I0623 15:55:13.305937    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.639] [AfterEach] [sig-node] PodPidsLimit [Serial]
... skipping 34 lines ...
I0623 16:08:44.644] I0623 15:55:36.457234    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.644] STEP: setting initial state "correct"
I0623 16:08:44.644] I0623 15:55:37.458982    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.644] I0623 15:55:47.474195    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.644] I0623 15:55:47.519224    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.644] I0623 15:55:47.519248    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.644] STEP: from "correct" to "fail-parse"
I0623 16:08:44.644] I0623 15:55:48.521049    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.645] I0623 15:55:58.536133    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.645] I0623 15:55:58.580391    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.645] I0623 15:55:58.580415    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.645] I0623 15:55:59.582555    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.645] STEP: back to "correct" from "fail-parse"
I0623 16:08:44.645] I0623 15:56:10.597404    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.645] I0623 15:56:10.641559    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.645] I0623 15:56:10.641584    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.646] I0623 15:56:11.642616    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.646] STEP: from "correct" to "fail-validate"
I0623 16:08:44.646] I0623 15:56:22.660448    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.646] I0623 15:56:22.704933    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.646] I0623 15:56:22.704956    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.646] STEP: back to "correct" from "fail-validate"
I0623 16:08:44.646] I0623 15:56:23.718928    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.646] I0623 15:56:33.736278    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.647] I0623 15:56:33.780908    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.647] I0623 15:56:33.780931    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.647] STEP: setting initial state "fail-parse"
I0623 16:08:44.647] I0623 15:56:34.831604    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.647] I0623 15:56:45.847004    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.647] I0623 15:56:45.897057    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.647] I0623 15:56:45.898292    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.648] I0623 15:56:46.900122    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.648] STEP: from "fail-parse" to "fail-validate"
I0623 16:08:44.648] I0623 15:56:57.916061    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.648] I0623 15:56:57.960908    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.648] I0623 15:56:57.960933    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.649] I0623 15:56:58.961965    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.649] STEP: back to "fail-parse" from "fail-validate"
I0623 16:08:44.649] I0623 15:57:09.993725    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.649] I0623 15:57:10.040724    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.649] I0623 15:57:10.040748    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.649] I0623 15:57:11.041750    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.649] STEP: setting initial state "fail-validate"
I0623 16:08:44.650] I0623 15:57:22.056450    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.650] I0623 15:57:22.100188    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.650] I0623 15:57:22.100212    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.650] I0623 15:57:23.101239    2494 server.go:182] Initial health check passed for service "kubelet"
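Each transition above follows the same pattern: issue the restart command for the "kubelet" service, then poll its healthz endpoint until it answers. A minimal sketch of such a poll in Go, using only the standard library (the URL and the one-second cadence are taken from the log; the helper name waitHealthy is hypothetical, not the framework's actual API):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url with HEAD requests (as util.go does in the log above)
// until it returns 200 OK or the deadline expires. Hypothetical helper.
func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Head(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthy
			}
		}
		time.Sleep(1 * time.Second) // the log shows one attempt per second
	}
	return fmt.Errorf("%s not healthy after %v", url, timeout)
}

func main() {
	if err := waitHealthy("http://127.0.0.1:10255/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}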
I0623 16:08:44.650] [AfterEach] 
I0623 16:08:44.650]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:123
... skipping 73 lines ...
I0623 16:08:44.661] I0623 15:58:12.897434    2494 util.go:247] new configuration has taken effect
I0623 16:08:44.661] STEP: Found 0 events.
I0623 16:08:44.661] Jun 23 15:58:12.902: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
I0623 16:08:44.661] Jun 23 15:58:12.902: INFO: 
I0623 16:08:44.661] Jun 23 15:58:12.904: INFO: 
I0623 16:08:44.661] Logging node info for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.669] Jun 23 15:58:12.905: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8    f8abf7bb-64d0-4a09-8d76-bda6f82ca588 4712 0 2021-06-23 14:06:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-06-23 14:06:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e_node.test Update v1 2021-06-23 15:57:34 +0000 UTC FieldsV1 {"f:spec":{"f:configSource":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{}}}}} } {kubelet Update v1 2021-06-23 15:57:46 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/resource":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{},"f:example.com/resource":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{".":{},"f:active":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}},"f:assigned":{".":{},"f:configMap":{".":{},"f:kubeletConfigKey":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-qhm6m,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},example.com/resource: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7807873024 0} {<nil>} 7624876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20926410752 0} {<nil>} 20435948Ki BinarySI},example.com/resource: {{2 0} {<nil>} 2 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7545729024 0} {<nil>} 7368876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-06-23 15:58:10 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-06-23 15:58:10 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-06-23 15:58:10 +0000 UTC,LastTransitionTime:2021-06-23 14:06:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-06-23 15:58:10 +0000 UTC,LastTransitionTime:2021-06-23 15:57:59 +0000 UTC,Reason:KubeletNotReady,Message:container runtime status check may not have completed yet,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.44,},NodeAddress{Type:Hostname,Address:n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4e796a5e880bc4c48961313e1ae0f7f2,SystemUUID:4e796a5e-880b-c4c4-8961-313e1ae0f7f2,BootID:652e6d17-906d-40c8-b209-9babb77c0a87,KernelVersion:5.12.7-300.fc34.x86_64,OSImage:Fedora CoreOS 34.20210529.3.0,ContainerRuntimeVersion:cri-o://1.21.0,KubeletVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,KubeProxyVersion:v1.22.0-beta.0.29+3b2a5902bf90d3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:70283c77abb54f37e57cf4b838ca8978a66e6da3bd72c555696e0eaae1356b58 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1648681988,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl@sha256:dd475f8a8c579cb78a13f54342e8569e7f925c8b0ba3a5599dbc55c97a4a76f1 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:875791114,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster@sha256:83aae3701992f5ab15b9093bc73e77b43cf61e2522d7bf90d61dcb383b818b22 k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:352434302,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs@sha256:90af3b1795d2669a4a07d3a0fecbaa2ac920ef69b3c588e93423e74501793cdc k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:272582535,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd@sha256:cba7b71304b6369c0d5e1ea5e70631354b5824c7f75dbce9d63149af216efbeb k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:128894977,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost@sha256:ef11a0f696f3489a1684af5525419ac332df8682a148c6843b4da63c1503ee5b k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:126732584,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:100377317,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:55e2dc12800dbf891abc700ef3004acf08ec15cc0fab95634327c09fd6d097eb k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:99655908,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:5b9eab56404c721c2f193d7967b57a92339506dfdba37e496e48304ff172e5b4 
k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:99654372,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 docker.io/google/cadvisor:latest],SizeBytes:70377136,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot@sha256:93f8fe220940db5f92e1572e72b1457fc683ea3aebd24ac9474c6bca65660834 k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:43878048,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-device-plugin@sha256:3dd0413e5a78f1c2a6484f168ba3daf23ebb0b1141897237e9559db6c5f7101f k8s.gcr.io/e2e-test-images/sample-device-plugin@sha256:e84f6ca27c51ddedf812637dd2bcf771ad69fdca1173e5690c372370d0f93c40 k8s.gcr.io/e2e-test-images/sample-device-plugin:1.3],SizeBytes:41740418,},ContainerImage{Names:[gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b],SizeBytes:33602447,},ContainerImage{Names:[docker.io/nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb docker.io/nfvpe/sriov-device-plugin:v3.1],SizeBytes:25603453,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx@sha256:ebf4de42b3d660133f6f7d0feddabe31a44d07ed55f59471fd2072b0d8e8afae k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:17245687,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils@sha256:d2a412b68cba0c952d98f837aeab5ab13e075dfbd78fcd183b76afa20de5bd3d k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:12250746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs@sha256:f6b1c4aef11b116c2a065ea60ed071a8f205444f1897bed9aa2e98a5d78cbdae k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7373984,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5502584,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:244bdbdf4b8d368b5836e9d2c7808a280a73ad72ae321d644e9f220da503218f k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1374910,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1319178,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause@sha256:369201a612f7b2b585a8e6ca99f77a36bcdbd032463d815388a96800b63ef2c8 
k8s.gcr.io/pause:3.5],SizeBytes:689969,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-qhm6m,UID:606667a9-6f8b-4e32-b539-b55bf666d41a,ResourceVersion:4701,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-qhm6m,UID:606667a9-6f8b-4e32-b539-b55bf666d41a,ResourceVersion:4701,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
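The Node object dumped above carries the dynamic-kubelet-config bookkeeping in Status.Config (assigned, active, and lastKnownGood sources; here both assigned and active point at the testcfg-qhm6m ConfigMap). A sketch of reading that status with client-go, assuming in-cluster credentials and the node name from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Node name taken from the log above.
	node, err := client.CoreV1().Nodes().Get(context.TODO(),
		"n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8",
		metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Status.Config reports which ConfigMap the kubelet was told to use
	// (Assigned) and which one is actually in effect (Active).
	if c := node.Status.Config; c != nil {
		fmt.Printf("assigned: %+v\nactive: %+v\nlastKnownGood: %+v\n",
			c.Assigned, c.Active, c.LastKnownGood)
	}
}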
I0623 16:08:44.669] Jun 23 15:58:12.906: INFO: 
I0623 16:08:44.670] Logging kubelet events for node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.670] Jun 23 15:58:12.907: INFO: 
I0623 16:08:44.670] Logging pods the kubelet thinks are on node n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:44.670] Jun 23 15:58:12.910: INFO: sample-device-plugin started at 2021-06-23 15:49:24 +0000 UTC (0+1 container statuses recorded)
I0623 16:08:44.670] Jun 23 15:58:12.910: INFO: 	Container sample-device-plugin ready: true, restart count 0
... skipping 17 lines ...
I0623 16:08:44.673]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:75
I0623 16:08:44.673]     
I0623 16:08:44.673]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:460
I0623 16:08:44.673]       should eventually evict all of the correct pods [BeforeEach]
I0623 16:08:44.673]       _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:475
I0623 16:08:44.673] 
I0623 16:08:44.673]       Unexpected error:
I0623 16:08:44.673]           <*exec.ExitError | 0xc000b41980>: {
I0623 16:08:44.674]               ProcessState: {
I0623 16:08:44.674]                   pid: 43415,
I0623 16:08:44.674]                   status: 256,
I0623 16:08:44.674]                   rusage: {
I0623 16:08:44.674]                       Utime: {Sec: 0, Usec: 25010},
... skipping 1172 lines ...
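The "Unexpected error" above is an *exec.ExitError whose raw wait status is 256, i.e. an exit code of 1 (the code sits in the high byte of the status word). A short sketch of how such an error is produced and decoded, using only os/exec and errors from the standard library:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// 'false' always exits with status 1, mirroring the failed command above.
	err := exec.Command("false").Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// ExitCode() extracts 1 from the raw wait status 256 (1 << 8).
		fmt.Println("exit code:", exitErr.ExitCode())
	}
}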
I0623 16:08:44.854] I0623 16:06:24.032990    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.854] I0623 16:06:24.896685    2494 util.go:247] new configuration has taken effect
I0623 16:08:44.854] I0623 16:06:25.040028    2494 server.go:182] Initial health check passed for service "kubelet"
I0623 16:08:44.854] I0623 16:06:26.040469    2494 server.go:222] Restarting server "kubelet" with restart command
I0623 16:08:44.855] I0623 16:06:26.084942    2494 server.go:171] Running health check for service "kubelet"
I0623 16:08:44.855] I0623 16:06:26.084967    2494 util.go:48] Running readiness check for service "kubelet"
I0623 16:08:44.855] W0623 16:06:27.085468    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.855] W0623 16:06:28.085900    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
... skipping 55 lines: the same health-check failure (connection refused on "http://127.0.0.1:10255/healthz") repeats once per second from 16:06:29 through 16:07:23 ...
I0623 16:08:44.872] W0623 16:07:24.123957    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.872] Jun 23 16:07:24.910: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3549b1f6-dad3-4a9c-ad85-778a9eb5f763] Cache-Control:[no-cache, private] Content-Length:[209] Content-Type:[application/json] Date:[Wed, 23 Jun 2021 16:07:24 GMT]] Body:0xc000908200 ContentLength:209 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0025eaf00 TLS:0xc001fdd6b0}
I0623 16:08:44.873] W0623 16:07:25.124975    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.873] W0623 16:07:26.125504    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.873] W0623 16:07:27.125947    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.873] W0623 16:07:28.127256    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.873] W0623 16:07:29.128439    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.874] Jun 23 16:07:29.922: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[994fde4b-88ec-45e6-a884-52bfa6c6da19] Cache-Control:[no-cache, private] Content-Length:[209] Content-Type:[application/json] Date:[Wed, 23 Jun 2021 16:07:29 GMT]] Body:0xc000908640 ContentLength:209 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0025eaf00 TLS:0xc001fddc30}
I0623 16:08:44.874] W0623 16:07:30.128834    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.874] W0623 16:07:31.129546    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.875] W0623 16:07:32.130115    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.875] W0623 16:07:33.130522    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.875] W0623 16:07:34.130944    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.876] Jun 23 16:07:34.920: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cf3cf970-2910-4d38-a156-139e79841d56] Cache-Control:[no-cache, private] Content-Length:[209] Content-Type:[application/json] Date:[Wed, 23 Jun 2021 16:07:34 GMT]] Body:0xc000908a80 ContentLength:209 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0025eaf00 TLS:0xc000efa210}
I0623 16:08:44.876] W0623 16:07:35.131502    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.876] W0623 16:07:36.132461    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.877] W0623 16:07:37.132947    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.877] W0623 16:07:38.133364    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.877] W0623 16:07:39.134029    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.878] Jun 23 16:07:39.921: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dd7aa330-ef44-4ca2-b043-b9d4b6505bc2] Cache-Control:[no-cache, private] Content-Length:[209] Content-Type:[application/json] Date:[Wed, 23 Jun 2021 16:07:39 GMT]] Body:0xc000908f80 ContentLength:209 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0025eaf00 TLS:0xc000efa790}
I0623 16:08:44.878] W0623 16:07:40.134414    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.879] W0623 16:07:41.135489    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.879] W0623 16:07:42.136278    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.879] W0623 16:07:43.137603    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.879] W0623 16:07:44.138323    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.880] Jun 23 16:07:44.920: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2a95fef7-2db2-4432-8235-f9adcdb23a4c] Cache-Control:[no-cache, private] Content-Length:[209] Content-Type:[application/json] Date:[Wed, 23 Jun 2021 16:07:44 GMT]] Body:0xc000909500 ContentLength:209 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0025eaf00 TLS:0xc000efae70}
I0623 16:08:44.880] W0623 16:07:45.139327    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.881] W0623 16:07:46.140533    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.881] W0623 16:07:47.140952    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.881] W0623 16:07:48.141361    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.881] W0623 16:07:49.141771    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.882] Jun 23 16:07:49.920: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c9e66292-cae8-46c8-ae20-9a9dcaa11150] Cache-Control:[no-cache, private] Content-Length:[209] Content-Type:[application/json] Date:[Wed, 23 Jun 2021 16:07:49 GMT]] Body:0xc0009099c0 ContentLength:209 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0025eaf00 TLS:0xc000efb3f0}
I0623 16:08:44.882] W0623 16:07:50.142809    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.882] W0623 16:07:51.143510    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.883] W0623 16:07:52.143955    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.883] W0623 16:07:53.144468    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.883] W0623 16:07:54.144923    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.884] Jun 23 16:07:54.922: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8f81b70c-d3fb-4452-becd-34709d3b0bf9] Cache-Control:[no-cache, private] Content-Length:[209] Content-Type:[application/json] Date:[Wed, 23 Jun 2021 16:07:54 GMT]] Body:0xc000909e40 ContentLength:209 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0025eaf00 TLS:0xc000efbad0}
I0623 16:08:44.884] W0623 16:07:55.145851    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.884] W0623 16:07:56.146615    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.885] W0623 16:07:57.147199    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.885] W0623 16:07:58.147567    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.885] W0623 16:07:59.148197    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.886] Jun 23 16:07:59.919: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f8e9530a-2ed6-40ae-b752-8779299ece1c] Cache-Control:[no-cache, private] Content-Length:[209] Content-Type:[application/json] Date:[Wed, 23 Jun 2021 16:07:59 GMT]] Body:0xc000ef82c0 ContentLength:209 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0025eaf00 TLS:0xc001c220b0}
I0623 16:08:44.886] W0623 16:08:00.149120    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.886] W0623 16:08:01.150510    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.887] W0623 16:08:02.150943    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.887] W0623 16:08:03.151385    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.887] W0623 16:08:04.151868    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.888] Jun 23 16:08:04.919: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e734d606-ff8c-4a77-9888-ced09f4b4f5a] Cache-Control:[no-cache, private] Content-Length:[209] Content-Type:[application/json] Date:[Wed, 23 Jun 2021 16:08:04 GMT]] Body:0xc000ef8600 ContentLength:209 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0025eaf00 TLS:0xc001c22630}
I0623 16:08:44.888] W0623 16:08:05.152354    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.888] W0623 16:08:06.152755    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.889] W0623 16:08:07.153477    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.889] W0623 16:08:08.153935    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.889] W0623 16:08:09.154426    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.890] Jun 23 16:08:09.921: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[41a35edf-c653-46ff-823c-88525cff6db1] Cache-Control:[no-cache, private] Content-Length:[209] Content-Type:[application/json] Date:[Wed, 23 Jun 2021 16:08:09 GMT]] Body:0xc000ef89c0 ContentLength:209 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0025eaf00 TLS:0xc001c22c60}
I0623 16:08:44.890] W0623 16:08:10.154901    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.890] W0623 16:08:11.155324    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.891] W0623 16:08:12.155850    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.891] W0623 16:08:13.156355    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.891] W0623 16:08:14.157194    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.892] Jun 23 16:08:14.921: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[325b40c4-75dc-45e0-a303-ce4d9fd347ba] Cache-Control:[no-cache, private] Content-Length:[209] Content-Type:[application/json] Date:[Wed, 23 Jun 2021 16:08:14 GMT]] Body:0xc000ef8e40 ContentLength:209 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0025eaf00 TLS:0xc001c231e0}
I0623 16:08:44.892] W0623 16:08:15.158191    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.892] W0623 16:08:16.159439    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.893] W0623 16:08:17.160436    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.893] W0623 16:08:18.160894    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.893] W0623 16:08:19.161242    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.894] Jun 23 16:08:19.920: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4435d285-668e-43fe-960c-50c747899910] Cache-Control:[no-cache, private] Content-Length:[209] Content-Type:[application/json] Date:[Wed, 23 Jun 2021 16:08:19 GMT]] Body:0xc000ef9240 ContentLength:209 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0025eaf00 TLS:0xc001c23810}
I0623 16:08:44.894] W0623 16:08:20.161688    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.894] W0623 16:08:21.162766    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.895] W0623 16:08:22.163203    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.895] W0623 16:08:23.164062    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.895] W0623 16:08:24.164506    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.896] Jun 23 16:08:24.922: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b7e856ab-4203-44f9-9092-747d19abb473] Cache-Control:[no-cache, private] Content-Length:[209] Content-Type:[application/json] Date:[Wed, 23 Jun 2021 16:08:24 GMT]] Body:0xc000ef96c0 ContentLength:209 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0025eaf00 TLS:0xc001c23d90}
I0623 16:08:44.896] W0623 16:08:25.165247    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.896] W0623 16:08:26.165640    2494 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I0623 16:08:44.898] F0623 16:08:26.165684    2494 server.go:180] Restart loop readinessCheck failed for server "kubelet" start-command: `/usr/bin/systemd-run -p Delegate=true --unit=kubelet-20210623T140232.service --slice=runtime.slice --remain-after-exit /tmp/node-e2e-20210623T140232/kubelet --kubeconfig /tmp/node-e2e-20210623T140232/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --feature-gates DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --dynamic-config-dir /tmp/node-e2e-20210623T140232/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20210623T140232/cni/bin --cni-conf-dir /tmp/node-e2e-20210623T140232/cni/net.d --cni-cache-dir /tmp/node-e2e-20210623T140232/cni/cache --hostname-override n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 --container-runtime remote --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20210623T140232/kubelet-config --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service --non-masquerade-cidr=0.0.0.0/0`, kill-command: `/usr/bin/systemctl kill kubelet-20210623T140232.service`, restart-command: `/usr/bin/systemctl restart kubelet-20210623T140232.service`, health-check: [http://127.0.0.1:10255/healthz], output-file: "kubelet.log"
I0623 16:08:44.898] goroutine 228 [running]:
I0623 16:08:44.898] k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000010001, 0xc002044a80, 0x54d, 0x973)
I0623 16:08:44.898] 	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
I0623 16:08:44.898] k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x8c18840, 0xc000000003, 0x0, 0x0, 0xc0009a4f50, 0x0, 0x738f4d5, 0x9, 0xb4, 0x0)
I0623 16:08:44.899] 	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x1e5
I0623 16:08:44.899] k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printf(0x8c18840, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x595d0a7, 0x29, 0xc0007d4b00, 0x1, ...)
... skipping 59 lines ...
I0623 16:08:44.912] k8s.io/kubernetes/test/e2e_node.setKubeletConfiguration(0xc000ad4f20, 0xc001196000, 0x0, 0x43ad5b)
I0623 16:08:44.912] 	_output/local/go/src/k8s.io/kubernetes/test/e2e_node/util.go:207 +0x45
I0623 16:08:44.912] k8s.io/kubernetes/test/e2e_node.runTest.func1(0xc001196000, 0xc000ad4f20)
I0623 16:08:44.912] 	_output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_container_manager_test.go:175 +0x45
I0623 16:08:44.912] panic(0x4b1ebc0, 0x60a7910)
I0623 16:08:44.913] 	/usr/local/go/src/runtime/panic.go:971 +0x499
I0623 16:08:44.913] k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc00215e1e0, 0xe2, 0xc00122ade8, 0x1, 0x1)
I0623 16:08:44.913] 	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xc8
I0623 16:08:44.913] k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/asyncassertion.(*AsyncAssertion).match.func1(0x58a7db2, 0x9)
I0623 16:08:44.913] 	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/asyncassertion/async_assertion.go:134 +0x373
I0623 16:08:44.914] k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/asyncassertion.(*AsyncAssertion).match(0xc001593b80, 0x61e0fd8, 0x8c48d78, 0x12a05f201, 0x0, 0x0, 0x0, 0x989680)
I0623 16:08:44.914] 	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/asyncassertion/async_assertion.go:156 +0x411
I0623 16:08:44.914] k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/asyncassertion.(*AsyncAssertion).Should(0xc001593b80, 0x61e0fd8, 0x8c48d78, 0x0, 0x0, 0x0, 0x6188368)
... skipping 16419 lines ...
I0623 16:08:46.981] net/http.(*persistConn).writeLoop(0xc00151d680)
I0623 16:08:46.981] 	/usr/local/go/src/net/http/transport.go:2382 +0xf7
I0623 16:08:46.981] created by net/http.(*Transport).dialConn
I0623 16:08:46.981] 	/usr/local/go/src/net/http/transport.go:1744 +0xc9c
I0623 16:08:46.981] 
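The fatal error above (server.go:180) comes out of a restart loop: the harness runs the systemctl restart command, then requires the health check to pass within a bounded window, aborting the suite once the retries are exhausted (the interleaved /configz 503s show the kubelet never came back). A minimal, self-contained sketch of that control flow, inferred from the log rather than copied from the e2e framework; the function signature and command strings are illustrative:

package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"time"
)

// restartAndCheck mirrors the restart loop visible in the log: issue the
// restart command, then give the service a bounded window to answer its
// health check. Hypothetical signature, not the framework's actual API.
func restartAndCheck(restartCmd []string, healthURL string, window time.Duration) error {
	if err := exec.Command(restartCmd[0], restartCmd[1:]...).Run(); err != nil {
		return fmt.Errorf("restart command failed: %w", err)
	}
	deadline := time.Now().Add(window)
	for time.Now().Before(deadline) {
		if resp, err := http.Head(healthURL); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("service not ready at %s within %v", healthURL, window)
}

func main() {
	err := restartAndCheck(
		[]string{"/usr/bin/systemctl", "restart", "kubelet-20210623T140232.service"},
		"http://127.0.0.1:10255/healthz", 2*time.Minute)
	if err != nil {
		fmt.Println(err) // the real harness logs fatally here, ending the run
	}
}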
I0623 16:08:46.981] Ginkgo ran 1 suite in 2h5m41.522161206s
I0623 16:08:46.981] Test Suite Failed
I0623 16:08:46.981] 
I0623 16:08:46.981] Failure Finished Test Suite on Host n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8
I0623 16:08:46.982] command [ssh -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@34.145.49.26 -- sudo sh -c 'cd /tmp/node-e2e-20210623T140232 && timeout -k 30s 25200.000000s ./ginkgo --nodes=1 --focus="\[Serial\]" --skip="\[Flaky\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeAlphaFeature:.+\]" ./e2e_node.test -- --system-spec-name= --system-spec-file= --extra-envs= --runtime-config= --logtostderr --v 4 --node-name=n1-standard-2-fedora-coreos-34-20210529-3-0-gcp-x86-64-002788d8 --report-dir=/tmp/node-e2e-20210623T140232/results --report-prefix=fedora --image-description="fedora-coreos-34-20210529-3-0-gcp-x86-64" --feature-gates=DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service --non-masquerade-cidr=0.0.0.0/0" --extra-log="{\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}"'] failed with error: exit status 1
I0623 16:08:46.983] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0623 16:08:46.983] <                              FINISH TEST                               <
I0623 16:08:46.983] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0623 16:08:46.983] 
I0623 16:08:46.983] Failure: 1 errors encountered.
W0623 16:08:47.083] exit status 1
... skipping 11 lines ...
I0623 16:08:47.434] Sourcing kube-util.sh
I0623 16:08:47.434] Detecting project
I0623 16:08:47.434] Project: k8s-jkns-pr-node-e2e
I0623 16:08:47.435] Network Project: k8s-jkns-pr-node-e2e
I0623 16:08:47.435] Zone: us-west1-b
I0623 16:08:47.435] Dumping logs from master locally to '/workspace/_artifacts'
W0623 16:08:48.460] ERROR: (gcloud.compute.addresses.describe) Could not fetch resource:
W0623 16:08:48.461]  - The resource 'projects/k8s-jkns-pr-node-e2e/regions/us-west1/addresses/bootstrap-e2e-master-ip' was not found
W0623 16:08:48.461] 
W0623 16:08:48.643] Could not detect Kubernetes master node.  Make sure you've launched a cluster with 'kube-up.sh'
I0623 16:08:48.744] Master not detected. Is the cluster up?
I0623 16:08:48.744] Dumping logs from nodes locally to '/workspace/_artifacts'
I0623 16:08:48.744] Detecting nodes in the cluster
... skipping 4 lines ...
W0623 16:08:53.788] NODE_NAMES=
W0623 16:08:53.791] 2021/06/23 16:08:53 process.go:155: Step './cluster/log-dump/log-dump.sh /workspace/_artifacts' finished in 6.559873034s
W0623 16:08:53.792] 2021/06/23 16:08:53 node.go:53: Noop - Node Down()
W0623 16:08:53.792] 2021/06/23 16:08:53 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0623 16:08:53.793] 2021/06/23 16:08:53 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
W0623 16:08:54.437] 2021/06/23 16:08:54 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 643.732783ms
W0623 16:08:54.438] 2021/06/23 16:08:54 main.go:327: Something went wrong: encountered 1 errors: [error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-pr-node-e2e --zone=us-west1-b --ssh-user=core --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --focus="\[Serial\]" --skip="\[Flaky\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeAlphaFeature:.+\]" --test_args=--feature-gates=DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service --non-masquerade-cidr=0.0.0.0/0" --extra-log="{\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}" --test-timeout=7h0m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/crio/latest/image-config-cgrpv1-serial.yaml: exit status 1]
W0623 16:08:54.449] Traceback (most recent call last):
W0623 16:08:54.450]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 723, in <module>
W0623 16:08:54.459]     main(parse_args())
W0623 16:08:54.460]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 569, in main
W0623 16:08:54.460]     mode.start(runner_args)
W0623 16:08:54.460]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 228, in start
W0623 16:08:54.460]     check_env(env, self.command, *args)
W0623 16:08:54.461]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0623 16:08:54.461]     subprocess.check_call(cmd, env=env)
W0623 16:08:54.461]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0623 16:08:54.461]     raise CalledProcessError(retcode, cmd)
W0623 16:08:54.462] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--deployment=node', '--provider=gce', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--gcp-project=k8s-jkns-pr-node-e2e', '--gcp-zone=us-west1-b', '--node-test-args=--feature-gates=DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service --non-masquerade-cidr=0.0.0.0/0" --extra-log="{\\"name\\": \\"crio.log\\", \\"journalctl\\": [\\"-u\\", \\"crio\\"]}"', '--node-tests=true', '--test_args=--nodes=1 --focus="\\[Serial\\]" --skip="\\[Flaky\\]|\\[Benchmark\\]|\\[NodeSpecialFeature:.+\\]|\\[NodeAlphaFeature:.+\\]"', '--timeout=420m', '--node-args=--image-config-file=/workspace/test-infra/jobs/e2e_node/crio/latest/image-config-cgrpv1-serial.yaml')' returned non-zero exit status 1
E0623 16:08:54.502] Command failed
I0623 16:08:54.502] process 560 exited with code 1 after 134.8m
E0623 16:08:54.503] FAIL: pull-kubernetes-node-kubelet-serial-crio-cgroupv1
I0623 16:08:54.541] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0623 16:08:55.447] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0623 16:08:55.611] process 59478 exited with code 0 after 0.0m
I0623 16:08:55.611] Call:  gcloud config get-value account
I0623 16:08:56.477] process 59491 exited with code 0 after 0.0m
I0623 16:08:56.478] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0623 16:08:56.479] Upload result and artifacts...
I0623 16:08:56.479] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/102508/pull-kubernetes-node-kubelet-serial-crio-cgroupv1/1407698145726435328
I0623 16:08:56.480] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/102508/pull-kubernetes-node-kubelet-serial-crio-cgroupv1/1407698145726435328/artifacts
W0623 16:08:58.992] CommandException: One or more URLs matched no objects.
E0623 16:08:59.664] Command failed
I0623 16:08:59.665] process 59504 exited with code 1 after 0.1m
W0623 16:08:59.665] Remote dir gs://kubernetes-jenkins/pr-logs/pull/102508/pull-kubernetes-node-kubelet-serial-crio-cgroupv1/1407698145726435328/artifacts not exist yet
I0623 16:08:59.666] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/102508/pull-kubernetes-node-kubelet-serial-crio-cgroupv1/1407698145726435328/artifacts
I0623 16:09:07.924] process 59651 exited with code 0 after 0.1m
I0623 16:09:07.925] Call:  git rev-parse HEAD
I0623 16:09:07.929] process 60204 exited with code 0 after 0.0m
... skipping 20 lines ...